diff --git a/markdown/api-reference.mdx b/markdown/api-reference.mdx index bcbb53c..842aa2e 100644 --- a/markdown/api-reference.mdx +++ b/markdown/api-reference.mdx @@ -409,6 +409,76 @@ curl -X POST http://localhost:8080/v1/chat/completions \ }' ``` +### Vision/Multimodal Support + +For vision-capable models, you can include images in your requests using either HTTP URLs or base64-encoded data URLs. Vision support must be enabled with `ENABLE_VISION=true` in your configuration. + +#### Using HTTP URL + +```bash +curl -X POST http://localhost:8080/v1/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "anthropic/claude-3-5-sonnet-20241022", + "messages": [ + { + "role": "user", + "content": [ + { + "type": "text", + "text": "What is in this image?" + }, + { + "type": "image_url", + "image_url": { + "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" + } + } + ] + } + ] + }' +``` + +#### Using Base64 Data URL + +```bash +curl -X POST http://localhost:8080/v1/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "anthropic/claude-3-5-sonnet-20241022", + "messages": [ + { + "role": "user", + "content": [ + { + "type": "text", + "text": "Describe this image" + }, + { + "type": "image_url", + "image_url": { + "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8DwHwAFBQIAX8jx0gAAAABJRU5ErkJggg==" + } + } + ] + } + ] + }' +``` + +**Supported Providers with Vision:** + +- OpenAI (GPT-4o, GPT-5, GPT-4.1, GPT-4 Turbo) +- Anthropic (Claude 3, Claude 4, Claude 4.5 Sonnet, Claude 4.5 Haiku) +- Google (Gemini 2.5) +- Cohere (Command A Vision, Aya Vision) +- Ollama (LLaVA, Llama 4, Llama 3.2 Vision) +- Groq (vision models) +- Mistral (Pixtral) + +**Note:** When `ENABLE_VISION=false` (default), requests containing image content will be rejected even if the model supports vision. 
This is disabled by default for performance and security reasons. + ### Direct API Proxy For more advanced use cases, you can proxy requests directly to the provider's API: diff --git a/markdown/configuration.mdx b/markdown/configuration.mdx index a4c1a63..8fb3c3a 100644 --- a/markdown/configuration.mdx +++ b/markdown/configuration.mdx @@ -21,6 +21,11 @@ Environment variables are the primary method for configuring Inference Gateway. +When `ENABLE_VISION` is set to `true`, Inference Gateway enables vision/multimodal capabilities, allowing you to send images alongside text in chat completion requests. When disabled (default), requests with image content will be rejected even if the provider and model support vision. This is disabled by default for performance and security reasons. + When `ENABLE_TELEMETRY` is set to `true`, Inference Gateway exposes a `/metrics` endpoint for Prometheus scraping and generates distributed traces that can be collected by OpenTelemetry collectors. ### OpenID Connect @@ -369,6 +376,7 @@ Here's a comprehensive example for configuring Inference Gateway in a production ```bash # General settings ENVIRONMENT=production +ENABLE_VISION=true ENABLE_TELEMETRY=true ENABLE_AUTH=true diff --git a/markdown/examples.mdx b/markdown/examples.mdx index bec3364..8595cd7 100644 --- a/markdown/examples.mdx +++ b/markdown/examples.mdx @@ -98,4 +98,75 @@ curl -X POST http://localhost:8080/v1/chat/completions \ }' ``` +### Vision/Multimodal Image Processing + +Process images with vision-capable models. First, enable vision support: + +```bash +ENABLE_VISION=true +``` + +#### Using HTTP Image URL + +```bash +curl -X POST http://localhost:8080/v1/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "anthropic/claude-3-5-sonnet-20241022", + "messages": [ + { + "role": "user", + "content": [ + { + "type": "text", + "text": "What is in this image?" 
+ }, + { + "type": "image_url", + "image_url": { + "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" + } + } + ] + } + ] + }' +``` + +#### Using Base64 Data URL + +```bash +curl -X POST http://localhost:8080/v1/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "anthropic/claude-3-5-sonnet-20241022", + "messages": [ + { + "role": "user", + "content": [ + { + "type": "text", + "text": "What color is this pixel?" + }, + { + "type": "image_url", + "image_url": { + "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8DwHwAFBQIAX8jx0gAAAABJRU5ErkJggg==" + } + } + ] + } + ] + }' +``` + +**Supported Vision Models:** + +- `anthropic/claude-3-5-sonnet-20241022` (Claude 3.5 Sonnet) +- `anthropic/claude-3-5-haiku-20241022` (Claude 3.5 Haiku) +- `openai/gpt-4o` +- `google/gemini-2.5-flash` +- `ollama/llava` +- And more... + For more detailed examples and use cases, check out the full [examples directory](https://github.com/inference-gateway/inference-gateway/tree/main/examples) in the GitHub repository. diff --git a/markdown/supported-providers.mdx b/markdown/supported-providers.mdx index a8e8b51..6ea9169 100644 --- a/markdown/supported-providers.mdx +++ b/markdown/supported-providers.mdx @@ -11,61 +11,153 @@ The following LLM providers are currently supported:

OpenAI

Access GPT models including GPT-3.5, GPT-4, and more.

-

Authentication: Bearer Token

-

Default URL: https://api.openai.com/v1

+
Authentication: Bearer Token
+
Default URL: https://api.openai.com/v1
+
Vision Support: ✅ Yes (GPT-4o, GPT-5, GPT-4.1, GPT-4 Turbo)
-
-

DeepSeek

-

Use DeepSeek's models for various natural language tasks.

-

Authentication: Bearer Token

-

Default URL: https://api.deepseek.com

+
+

DeepSeek

+

Use DeepSeek's models for various natural language tasks.

+
+ Authentication: Bearer Token
- -
-

Anthropic

-

Connect to Claude models for high-quality conversational AI.

-

Authentication: X-Header

-

Default URL: https://api.anthropic.com/v1

+
+ Default URL: https://api.deepseek.com
- -
-

Cohere

-

Use Cohere's models for various natural language tasks.

-

Authentication: Bearer Token

-

Default URL: https://api.cohere.com

+
+ Vision Support: ❌ No
- -
-

Groq

-

Access high-performance inference with Groq's LPU-accelerated models.

-

Authentication: Bearer Token

-

Default URL: https://api.groq.com/openai/v1

+
+ +
+

Anthropic

+

Connect to Claude models for high-quality conversational AI.

+
+ Authentication: X-Header
- -
-

Cloudflare

-

Connect to Cloudflare Workers AI for inference on various models.

-

Authentication: Bearer Token

-

Default URL: https://api.cloudflare.com/client/v4/accounts/

-

{'{ACCOUNT_ID}'}/ai

+
+ Default URL: https://api.anthropic.com/v1
- -
-

Ollama

-

Run open-source models locally or on a self-hosted server.

-

Authentication: None (optional API key)

-

Default URL: http://ollama:8080/v1

+
+ Vision Support: ✅ Yes (Claude 3, Claude 4, Claude 4.5 Sonnet, Claude 4.5 + Haiku) +
+
+ +
+

Cohere

+

Use Cohere's models for various natural language tasks.

+
+ Authentication: Bearer Token +
+
+ Default URL: https://api.cohere.com +
+
+ Vision Support: ✅ Yes (Command A Vision, Aya Vision) +
+
+ +
+

Groq

+

Access high-performance inference with Groq's LPU-accelerated models.

+
+ Authentication: Bearer Token +
+
+ Default URL: https://api.groq.com/openai/v1 +
+
+ Vision Support: ✅ Yes (vision models) +
+
+ +
+

Cloudflare

+

Connect to Cloudflare Workers AI for inference on various models.

+
+ Authentication: Bearer Token +
+
+ Default URL: https://api.cloudflare.com/client/v4/accounts/ +
+
{'{ACCOUNT_ID}'}/ai
+
+ Vision Support: ❌ No +
+
+ +
+

Ollama

+

Run open-source models locally or on a self-hosted server.

+
+ Authentication: None (optional API key)
- +
+ Default URL: http://ollama:8080/v1 +
+
+ Vision Support: ✅ Yes (LLaVA, Llama 4, Llama 3.2 Vision) +
+
+

Google

Access Google's Gemini models for text generation and understanding.

-

Authentication: Bearer Token

-

Default URL: https://generativelanguage.googleapis.com/v1

+
Authentication: Bearer Token
+
Default URL: https://generativelanguage.googleapis.com/v1
+
Vision Support: ✅ Yes (Gemini 2.5)
+## Vision/Multimodal Support + +Several providers support vision/multimodal capabilities, allowing you to process images alongside text. To use vision features, you must enable them in your configuration: + +```bash +ENABLE_VISION=true +``` + +**Note:** Vision support is disabled by default for performance and security reasons. When disabled, requests containing image content will be rejected even if the model supports vision. + +### Providers with Vision Support + +- **OpenAI**: GPT-4o, GPT-5, GPT-4.1, GPT-4 Turbo +- **Anthropic**: Claude 3, Claude 4, Claude 4.5 Sonnet, Claude 4.5 Haiku +- **Google**: Gemini 2.5 +- **Cohere**: Command A Vision, Aya Vision +- **Ollama**: LLaVA, Llama 4, Llama 3.2 Vision +- **Groq**: Vision models +- **Mistral**: Pixtral + +### Example Vision Request + +```bash +curl -X POST http://localhost:8080/v1/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "anthropic/claude-3-5-sonnet-20241022", + "messages": [ + { + "role": "user", + "content": [ + { + "type": "text", + "text": "What is in this image?" + }, + { + "type": "image_url", + "image_url": { + "url": "https://example.com/image.jpg" + } + } + ] + } + ] + }' +``` + ## Using Providers ### Provider Configuration diff --git a/public/search-index.json b/public/search-index.json index a21f176..638d83b 100644 --- a/public/search-index.json +++ b/public/search-index.json @@ -1 +1 @@ -{"10":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"15":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"16":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"17":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"20":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"30":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"40":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"47":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"50":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"60":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"70":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"80":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"123":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"200":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"333":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"400":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"401":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"408":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"429":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"500":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"502":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"503":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"504":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"1000":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"3003":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"3004":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"8080":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8081":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8082":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8084":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"8085":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"1741879542":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"agenttoagent":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"a2a":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"integration":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"what":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"is":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"replace":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"with":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"actual":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"in":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"production":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"example":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"multiple":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"agents":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"real":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"implementations":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"client":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"timeout":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"default":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"30s":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"docker":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"compose":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"available":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"google":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"calendar":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"calculator":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"weather":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"hello":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"the":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"inference":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"gateway":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"now":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"supports":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"enabling":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"large":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"language":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"models":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"llms":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"to":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"seamlessly":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"coordinate":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"provides":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"domains":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"distributed":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"architecture":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"run":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"as":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"separate":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"scale":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"independently":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"natural":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"users":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"interact":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"naturally":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"while":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"handles":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"standardization":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"based":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"on":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"interoperability":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"mermaid":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"init":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"theme":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"base":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"themevariables":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"primarycolor":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"326ce5":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"primarytextcolor":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"sequentially":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"result":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"from":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"all":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"integrated":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"coherent":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"by":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"setting":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"these":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"bash":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"a2a_enabletrue":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"a2a_exposetrue":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"a2a_agentshttp":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"googlecalendaragent":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"http":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"yourcustomagent":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"shows":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"configure":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"helloworldagent":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"calculatoragent":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"weatheragent":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"your":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"localhost":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"running":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"locally":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"if":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"enabled":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"enable_authtrue":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"must":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"include":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"bearer":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"token":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"authorization":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"your_jwt_token":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"jwt":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"issued":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"configured":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"identity":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"idp":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"specified":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"openid":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"connect":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"settings":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"oidc_issuer_url":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"oidc_client_id":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"etc":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"validates":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"tokens":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"against":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"authenticate":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"get":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"across":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"v1models":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"status":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"nnmaybe":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"acknowledge":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"greeting":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"offer":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"my":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"help":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"something":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"since":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"mentioned":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"working":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"math":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"problems":[{"title":"API 
Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"solving":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"puzzles":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"ll":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"stick":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"nni":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"want":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs 
and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"make":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"sure":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"approaching":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"it":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"right":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"question":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"yet":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"but":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"see":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"more":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"them":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"maybe":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"assistance":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"responding":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"an":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference 
Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"emoji":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"like":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"would":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"be":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"nice":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"nthinknnhello":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"finish_reason":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"length":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"prompt_tokens":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"completion_tokens":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"total_tokens":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"true":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"streamed":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"server":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"sent":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"events":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"sse":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"https":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various 
providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"developer":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"mozilla":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"objects":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"curl":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity 
providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"helpful":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"helm":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"obtaining":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"password":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"grant":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"testing":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"credentials":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"servicetoservice":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"making":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"authenticated":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"selfsigned":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"certificates":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"create":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"keycloaks":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"ca":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"certificate":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"mount":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"next":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"steps":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"secure":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"guide":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"focuses":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"popular":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"opensource":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"solution":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"valid":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"header":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"validated":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"applications":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"processed":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"otherwise":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"unauthorized":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"returned":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"set":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"oidc_issuer_urlhttps":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"section":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"detailed":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"integrating":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"www":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"org":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"v24":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"later":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"recommended":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"github":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"v0":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"kubectl":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"iodocstaskstools":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"shdocsintroinstall":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"chart":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"core":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"commands":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"essential":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"infer":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"prompt":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"enabledisable":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"execution":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"manage":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"whitelist":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"safety":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"sandbox":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"file":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"structure":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"search":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"development":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"storage":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"backends":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"interface":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"shortcuts":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"userdefined":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"format":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"whitelisting":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"allowed":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"remove":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"protected":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"paths":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"approval":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"prompts":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"shell":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"aliases":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"bashrc":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"zshrc":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"workflow":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"project":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"agentdriven":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"problem":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"cicd":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"automated":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"code":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"review":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"verify":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"debug":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"mode":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"permission":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"reset":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"reinitialize":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"validate":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"logging":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"gobased":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"commandline":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"comprehensive":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"proper":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"optimal":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"suit":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"different":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"scenarios":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"most":[{"title":"Configuration","excerpt":"import ConfigTable from 
'../components/ConfigTable';","url":"/configuration"}],"deployments":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"kubernetesbased":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"files":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"primary":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"method":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"configuring":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"control":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"everything":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"basic":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Model Context Protocol (MCP) 
Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"providerspecific":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"rows":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"defaultvalue":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"enable_telemetry":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"opentelemetry":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"tracing":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"exposes":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"endpoint":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"scraping":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"generates":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"traces":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"collected":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"collectors":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"issuer":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"behavior":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"server_host":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"host":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"server_port":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"port":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"server_read_timeout":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"read":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"server_write_timeout":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"write":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"server_idle_timeout":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"idle":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"120s":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"server_tls_cert_path":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"tls":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"path":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"server_tls_key_path":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"strongly":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"pem":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"connects":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"thirdparty":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"client_timeout":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"connections":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"per":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"minimum":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"tls12":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"highthroughput":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"consider":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"increasing":[{"title":"Configuration","excerpt":"import ConfigTable from 
'../components/ConfigTable';","url":"/configuration"}],"pool":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"at":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"plan":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"openai_api_url":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"comv1":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"openai_api_key":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"anthropic_api_url":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"anthropic_api_key":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"cohere_api_url":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"cohere_api_key":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"groq_api_url":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"comopenaiv1":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"groq_api_key":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"ollama_api_url":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"8080v1":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"ollama_api_key":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"cloudflare_api_url":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"doesnt":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"exist":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"keys":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"creation":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm 
charts.","url":"/deployment"}],"certmanager":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"horizontal":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"autoscaling":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"accessing":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"upgrading":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"uninstalling":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"checking":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"sources":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"explains":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"charts":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"v3":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"there":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"two":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"deploys":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"both":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"directly":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"enabledtrue":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"before":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"deploying":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"may":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"need":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"doesn":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"dryrunclient":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"eof":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"annotations":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"shreleasename":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"inferencegatewayui":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"shreleasenamespace":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"labels":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"app":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Observability","excerpt":"Inference 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"simplifying":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"process":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"sending":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"receiving":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"mixture":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"experts":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"under":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"mit":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"license":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"easily":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"tooluse":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a 
proxy server designed to facilitate access to various","url":"/"}],"calling":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"supported":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"extend":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"realtime":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"ready":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"built":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"mind":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"configurable":[{"title":"Inference Gateway 
Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"lightweight":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"includes":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"libraries":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"runtime":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"resulting":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"smaller":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"size":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"binary":[{"title":"Inference Gateway 
Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"8mb":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"consumption":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"consume":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"lower":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"footprint":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"well":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"documented":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"guides":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"tested":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"extensively":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"unit":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to 
facilitate access to various","url":"/"}],"maintained":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"actively":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"developed":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"scalable":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"used":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"hpa":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"compliance":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"privacy":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"does":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"collect":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"analytics":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"ensuring":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"selfhosted":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"over":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"try":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"our":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"href":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"gettingstarted":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"guidelink":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"own":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"instance":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to 
various","url":"/"}],"minutes":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"acts":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"intermediary":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"between":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"standardizing":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"interactions":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"switch":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"without":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"changing":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"application":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"sophisticated":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"routing":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"fallback":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"mechanisms":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"centralize":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"policies":[{"title":"Inference 
Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"native":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"connected":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"calls":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"clientside":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"filesystems":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"databases":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"integrations":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"export":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"mcp_enabletrue":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcp_servers":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"filesystemserver":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8081mcp":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"searchserver":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8082mcp":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"recent":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"news":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"about":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"integrationlink":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"explore":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"middleware":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"types":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"filesystem":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"time":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"database":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"handling":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"considerations":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"inspector":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"included":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"tutorials":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"python":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"gridcols2":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"gap4":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"mb8":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"p4":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"roundedmd":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3pythonh3":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"stronginstallation":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"strong":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"pip":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"stronggithub":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"strongpypi":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"pypi":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"h3typescripth3":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"typescript":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"npm":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"inferencegatewaysdk":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"strongnpm":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"npmjs":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"h3goh3":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"h3rusth3":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"rust":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"cargo":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"strongcrate":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"crates":[{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"details":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3openaih3":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"paccess":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"gpt":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"tokenp":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"pstrongdefault":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"comv1p":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3deepseekh3":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"puse":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"comp":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3anthropich3":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with 
multiple LLM providers.","url":"/supported-providers"}],"pconnect":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"claude":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"highquality":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"conversational":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"xheaderp":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3cohereh3":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3groqh3":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"highperformance":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"lpuaccelerated":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"comopenaiv1p":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3cloudflareh3":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM 
providers.","url":"/supported-providers"}],"workers":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"comclientv4accountsp":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"textsm":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"account_id":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"aip":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3ollamah3":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"prun":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"none":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"8080v1p":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"h3googleh3":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"gemini":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"text":[{"title":"Supported 
Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"generation":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"understanding":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"generativelanguage":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"googleapis":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"requires":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"provider_api_url":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"provider_api_key":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"uppercase":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"offers":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"approaches":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"consistent":[{"title":"Supported Providers","excerpt":"Inference Gateway 
provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"model_name":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"also":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"8080v1models":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"deepseekreasoner":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"capital":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"france":[{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"screenshots":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"webbased":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"interactively":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"exploring":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and 
exploring the various language model providers and capabilities.","url":"/ui"}],"userfriendly":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"experiment":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"view":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"writing":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"any":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"backend":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"ve":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"selected":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"choose":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user 
interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"offered":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"filters":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"familiar":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"experience":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"input":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"area":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"markdown":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"formatting":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"reasoning":[{"title":"User Interface 
(UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"thought":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"statistics":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"clean":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"modern":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"light":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"dark":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"layout":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"responsive":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and 
capabilities.","url":"/ui"}],"desktop":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"mobile":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"devices":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"imageschatui":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"png":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"technologies":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"nextjs":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"react":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"component":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and 
exploring the various language model providers and capabilities.","url":"/ui"}],"tailwind":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"css":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"styling":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"deployed":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"standalone":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"container":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"could":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"orchestration":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"such":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional 
web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"self":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"building":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"provided":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"approx":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"300mb":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"js":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"customize":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}]} \ No newline at end of file +{"10":[{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"15":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the 
architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"16":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"17":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"20":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"30":[{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"40":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"47":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"50":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"60":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"70":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"80":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"123":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"200":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"333":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"400":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"401":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"408":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"429":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"500":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"502":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"503":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"504":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"1000":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"3003":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"3004":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"8080":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8081":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8082":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8084":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"8085":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"1741879542":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"agenttoagent":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"a2a":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"integration":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"what":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"is":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"key":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"features":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"how":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"works":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"example":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"multiple":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"agents":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"real":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"implementations":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"client":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"timeout":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"default":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"30s":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"docker":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"compose":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"available":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"google":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"calendar":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"calculator":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"weather":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"hello":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"world":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"}],"api":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"usage":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"examples":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"single":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"interaction":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"mathematical":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"now":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"supports":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"enabling":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"large":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"language":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"models":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"llms":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"to":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"seamlessly":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"coordinate":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"external":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"specialized":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"this":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"powerful":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"feature":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"allows":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"access":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"scale":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"independently":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"natural":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"users":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"interact":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"naturally":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"while":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"handles":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"standardization":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"based":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"on":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"interoperability":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"mermaid":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"init":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"theme":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"base":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"themevariables":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"primarycolor":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"326ce5":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"primarytextcolor":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"fff":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"linecolor":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"5d8aa8":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"secondarycolor":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"006100":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"flowchart":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"nodespacing":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"rankspacing":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"curve":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"linear":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"graph":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"td":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"request":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"discoverbravailable":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"registry":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"coordinatebrmultiple":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"bremail":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"brcalendar":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"brcalculator":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"brweather":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"results":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"unified":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"response":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"classdef":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"fill":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"9370db":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"stroke":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"strokewidth":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"2px":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"color":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"white":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"32cd32":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"system":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"f5a800":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"black":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"class":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"when":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"makes":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"analysis":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"analyzes":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"are":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"task":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"decomposition":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"broken":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"down":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"into":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"tasks":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. 
This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"or":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"sequentially":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"result":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"from":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"},{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"demonstration":[{"title":"Agent-To-Agent (A2A) Integration","excerpt":"The Inference Gateway now supports **Agent-To-Agent (A2A)** integration, enabling Large Language Models (LLMs) to seamlessly coordinate with external specialized agents. This powerful feature allows LLMs to access and utilize a wide range of external tools and services through standardized agent interfaces.","url":"/a2a"}],"reference":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"url":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"authentication":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"provider":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"chat":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"completions":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"body":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"streaming":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"proxy":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"openai":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"completion":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"health":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"check":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"error":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"responses":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"advanced":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"tool":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"visionmultimodal":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"using":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"base64":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"data":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"direct":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"openapi":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"specification":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"restful":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"interacting":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"various":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"cloudflare":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"ollama":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"deepseek":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"gpt3":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"5turbo":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"post":[{"title":"API 
Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"json":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"ollamadeepseekr1":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"5b":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"name":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"messages":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"array":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"role":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"assistant":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"}],"message":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"sender":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"content":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"hi":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"you":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"doing":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"today":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"stream":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"false":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"optional":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"they":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"re":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"generated":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"max_tokens":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"maximum":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"number":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"generate":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"type":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"function":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"string":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"description":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"parameters":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and 
capabilities.","url":"/ui"}],"defining":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"chatcmpl753":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"deepseekr1":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"choices":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"index":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"thinknokay":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"so":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"greeted":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"me":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"said":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"just":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"starting":[{"title":"API 
Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"say":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"should":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"respond":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"friendly":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"}],"way":[{"title":"API Reference","excerpt":"Inference Gateway provides a RESTful API for interacting with language models from various providers.","url":"/api-reference"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ig2":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ig3":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"h1":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"h2":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"h3":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"define":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"styles":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"1px":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"apply":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"following":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"diagram":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ingress":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"border":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"subgraph":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"cluster":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"ingressapi":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"service":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"internal":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"internalclients":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"internalagents":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"pod1":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"pod2":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"pod3":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"pods":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"pod":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"pg1":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"end":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"pg2":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"pg3":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"monitoring":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"stack":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"servicemonitor":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"metrics":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"sm":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"scrapes":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"prometheus":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"grafana":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"externalproviders":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"placed":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"inside":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"k8s":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"visually":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ext1":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ext2":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ext3":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ext4":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ext5":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ext6":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ext7":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"ffffff":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"externalsvc":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"monitor":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"84a392":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. 
The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"internalclient":[{"title":"Architecture Overview","excerpt":"This document provides a high-level overview of the architecture of the Inference Gateway. The Inference-Gateway is designed to be modular and extensible, allowing easy integration of new models and providers.","url":"/architecture-overview"}],"flow":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"keycloak":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"prerequisites":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"up":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"option":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"manual":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"test":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"fetchaccesstoken":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"localv1models":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"manually":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"follow":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"install":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"official":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"realm":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"log":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"click":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"enter":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"go":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"save":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"page":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"confidential":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"account":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"changes":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"tab":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"copy":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"add":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"email":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"userexample":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"com":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"verified":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"temporary":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"off":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"apiversion":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"v1":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"kind":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"metadata":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"namespace":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"enable_auth":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"stringdata":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"oidc_client_secret":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"}],"yourclientsecret":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"}],"opaque":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"upgrade":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"createnamespace":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. 
This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"envfrom":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"oci":[{"title":"Authentication","excerpt":"Inference Gateway supports authentication through OpenID Connect (OIDC), allowing you to secure your API with various identity providers. This guide focuses on setting up authentication with Keycloak, a popular open-source identity and access management solution.","url":"/authentication"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"}],"cli":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"script":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"resource":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"launch":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"scrolling":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"keyboard":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"navigation":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"selection":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"expansion":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"execute":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"complex":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"background":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"analyze":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"codebase":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"suggest":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"improvements":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"fix":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"failing":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"tests":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"suite":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"implement":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"issue":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"operates":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"autonomously":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"planning":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"validation":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"phases":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"config":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"setmodel":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"openaigpt4":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"setsystem":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"coding":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"disable":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"exec":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"require":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"protectedpath":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"uses":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"2layer":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"precedence":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"infer_":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"prefix":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"highest":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"priority":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"line":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"flags":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"builtin":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"defaults":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"lowest":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"api_key":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"retry":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"max_attempts":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"initial_backoff_sec":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"max_backoff_sec":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"backoff_multiplier":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. 
It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"safe":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"readonly":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"directories":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"tmp":[{"title":"Inference Gateway CLI","excerpt":"The Inference Gateway CLI (`infer`) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway. It features interactive chat, autonomous agent capabilities, extensive tool integration, and advanced conversation management.","url":"/cli"}],"methods":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Software Development Kits (SDK's)","excerpt":"Inference Gateway provides official SDKs for various programming languages to simplify integration with your applications. 
These SDKs offer typed interfaces, error handling, and convenience methods for interacting with the Inference Gateway API.","url":"/sdks"}],"context":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcp":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"ui":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"variable":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"env":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"configmaps":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"secrets":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Deployment with Helm","excerpt":"This guide explains how to deploy the Inference Gateway using the official Helm charts.","url":"/deployment"},{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"import":[{"title":"Configuration","excerpt":"import ConfigTable from '../components/ConfigTable';","url":"/configuration"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
charts.","url":"/deployment"}],"processing":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"environments":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"simple":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"minimal":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"appsv1":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"replicas":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and 
environments.","url":"/examples"}],"selector":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"matchlabels":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"template":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"containers":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"containerport":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"valuefrom":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"secretkeyref":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"llmsecrets":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"openaiapikey":[{"title":"Examples","excerpt":"This page 
provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"including":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"visit":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"openaigpt4o":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"explain":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"compare":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"contrast":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"process":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"visioncapable":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"first":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"enable_visiontrue":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM 
providers.","url":"/supported-providers"}],"image_url":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"upload":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"wikimedia":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"jpg":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"pixel":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"imagepng":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"supported":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"claude":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported 
Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"sonnet":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"haiku":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"}],"googlegemini2":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"5flash":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"ollamallava":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"cases":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"}],"out":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"full":[{"title":"Examples","excerpt":"This page provides examples of how to use Inference Gateway in various scenarios and environments.","url":"/examples"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"getting":[{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"started":[{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"}],"learn":[{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"pull":[{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"}],"rm":[{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"}],"checkout":[{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"openaigpt4omini":[{"title":"Getting Started","excerpt":"Learn how to install and set up Inference Gateway.","url":"/getting-started"}],"ides":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"vscode":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"cursor":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"fully":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"openaicompatible":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"favorite":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"extensions":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"standard":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"sections":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"instructions":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"continue":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"dev":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"extension":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"follows":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"marketplace":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"visualstudio":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"continuedev":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"open":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Inference Gateway Documentation","excerpt":"Inference Gateway is a proxy server designed to facilitate access to various","url":"/"},{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"title":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"yourmodelname":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"apibase":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"8080v1":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"},{"title":"Supported Providers","excerpt":"Inference Gateway provides a unified interface to interact with multiple LLM providers.","url":"/supported-providers"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"apikey":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"yourapikey":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"restart":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"aifirst":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"editor":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"clicking":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"gear":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. The following sections provide instructions for configuring popular IDEs and extensions.","url":"/ides"}],"icon":[{"title":"Integrated Development Environments (IDEs)","excerpt":"Inference Gateway is fully OpenAI-compatible, allowing you to configure it with your favorite IDEs and extensions using standard OpenAI integration settings. 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"individually":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"modelcontextprotocol":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"securely":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"systems":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"query":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"manipulate":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"engines":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"retrieve":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"web":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"made":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"multiserver":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"dynamic":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"injection":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"injected":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"executed":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"transparently":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"zero":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"don":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"know":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"individual":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"observability":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"tb":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"assembly":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"sends":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"added":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"decides":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"which":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"executes":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"via":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"delivery":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"timeserver":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"8083mcp":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcp_dial_timeout5s":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcp_servershttp":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcptimeserver":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcpsearchserver":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"groq_api_key":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"values":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcp_enable":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcp_expose":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcp_client_timeout":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"10s":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). 
This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"mcp_request_timeout":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"once":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"},{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"txt":[{"title":"Model Context Protocol (MCP) Integration","excerpt":"The Inference Gateway supports **Model Context Protocol (MCP)** integration, enabling seamless access to external tools and data sources for Large Language Models (LLMs). This powerful feature automatically discovers and provides tools to LLMs without requiring clients to manage them individually.","url":"/mcp"}],"collection":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"components":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"prebuilt":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"dashboards":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"robust":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"troubleshoot":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"integrate":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"logs":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"insights":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"telemetry":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"enable_telemetrytrue":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"scraped":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"count":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"duration":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"rates":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"utilization":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"cpu":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"memory":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"exposing":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. These features include metrics collection, tracing, and logging capabilities that integrate with popular monitoring tools.","url":"/observability"}],"timeseries":[{"title":"Observability","excerpt":"Inference Gateway provides robust observability features to help monitor and troubleshoot your deployment. 
and exploring the various language model providers and capabilities.","url":"/ui"}],"such":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"self":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"building":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"provided":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"approx":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"300mb":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"js":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}],"customize":[{"title":"User Interface (UI)","excerpt":"Inference Gateway includes an optional web-based user interface for interactively testing and exploring the various language model providers and capabilities.","url":"/ui"}]}
\ No newline at end of file
diff --git a/public/sitemap.xml b/public/sitemap.xml
index 24c9245..de61d90 100644
--- a/public/sitemap.xml
+++ b/public/sitemap.xml
@@ -20,7 +20,7 @@
     https://docs.inference-gateway.com/api-reference
-    2025-08-28T06:34:23.708Z
+    2025-11-15T19:55:03.391Z
     monthly
     0.8
@@ -44,7 +44,7 @@
     https://docs.inference-gateway.com/configuration
-    2025-08-28T06:07:34.671Z
+    2025-11-15T19:53:28.784Z
     monthly
     0.8
@@ -56,7 +56,7 @@
     https://docs.inference-gateway.com/examples
-    2025-08-28T06:07:34.672Z
+    2025-11-15T19:55:28.657Z
     monthly
     0.8
@@ -92,7 +92,7 @@
     https://docs.inference-gateway.com/supported-providers
-    2025-08-28T06:07:34.673Z
+    2025-11-15T21:12:52.321Z
     monthly
     0.8