diff --git a/src/agent/doc/index.rst b/src/agent/doc/index.rst index 103987c12..2178e47a1 100644 --- a/src/agent/doc/index.rst +++ b/src/agent/doc/index.rst @@ -49,7 +49,8 @@ array of options:: The structure of the input message bag is flexible, see `Platform Component`_ for more details on how to use it. -**Options** +Options +~~~~~~~ As with the Platform component, you can pass options to the agent when running it. These options configure the agent's behavior, for example available tools to execute, or are forwarded to the underlying platform and model. @@ -88,12 +89,14 @@ Custom tools can basically be any class, but must be configured by the ``#[AsTool]`` } } -**Tool Return Value** +Tool Return Value +~~~~~~~~~~~~~~~~~ In the end, the tool's result needs to be a string, but Symfony AI converts arrays and objects that implement the JsonSerializable interface to JSON strings for you. So you can return arrays or objects directly from your tool. -**Tool Methods** +Tool Methods +~~~~~~~~~~~~ You can configure the method to be called by the LLM with the #[AsTool] attribute and have multiple tools per class:: @@ -122,13 +125,15 @@ You can configure the method to be called by the LLM with the #[AsTool] attribut } } -**Tool Parameters** +Tool Parameters +~~~~~~~~~~~~~~~ Symfony AI generates a JSON Schema representation for all tools in the Toolbox based on the #[AsTool] attribute and method arguments and param comments in the doc block. Additionally, JSON Schema supports validation rules, which are partially supported by LLMs like GPT. -**Parameter Validation with #[With] Attribute** +Parameter Validation with #[With] Attribute +...........................................
To leverage JSON Schema validation rules, configure the ``#[With]`` attribute on the method arguments of your tool:: @@ -157,7 +162,8 @@ To leverage JSON Schema validation rules, configure the ``#[With]`` attribute on See attribute class ``Symfony\AI\Platform\Contract\JsonSchema\Attribute\With`` for all available options. -**Automatic Enum Validation** +Automatic Enum Validation +......................... For PHP backed enums, Symfony AI provides automatic validation without requiring any ``#[With]`` attributes:: @@ -201,7 +207,8 @@ This eliminates the need for manual ``#[With(enum: [...])]`` attributes when usi Please be aware that this is only converted into a JSON Schema for the LLM to respect, but not validated by Symfony AI. -**Third-Party Tools** +Third-Party Tools +~~~~~~~~~~~~~~~~~ In some cases you might want to use third-party tools, which are not part of your application. Adding the ``#[AsTool]`` attribute to the class is not possible in those cases, but you can explicitly register the tool in the MemoryFactory:: @@ -235,7 +242,8 @@ tools in the same chain - which even enables you to overwrite the pre-existing c The order of the factories in the ChainFactory matters, as the first factory has the highest priority. -**Agent uses Agent 🤯** +Agent uses Agent 🤯 +~~~~~~~~~~~~~~~~~~ Similar to third-party tools, an agent can also use a different agent as a tool. This can be useful to encapsulate complex logic or to reuse an agent in multiple places or hide sub-agents from the LLM:: @@ -251,7 +259,8 @@ complex logic or to reuse an agent in multiple places or hide sub-agents from th ->addTool($agentTool, 'research_agent', 'Meaningful description for sub-agent'); $toolbox = new Toolbox($metadataFactory, [$agentTool]); -**Fault Tolerance** +Fault Tolerance +~~~~~~~~~~~~~~~ To gracefully handle errors that occur during tool calling, e.g. wrong tool names or runtime errors, you can use the ``FaultTolerantToolbox`` as a decorator for the Toolbox.
It will catch the exceptions and return readable error messages @@ -299,14 +308,16 @@ If you want to expose the underlying error to the LLM, you can throw a custom ex } } -**Tool Filtering** +Tool Filtering +~~~~~~~~~~~~~~ To limit the tools provided to the LLM in a specific agent call to a subset of the configured tools, you can use the tools option with a list of tool names:: $this->agent->call($messages, ['tools' => ['tavily_search']]); -**Tool Result Interception** +Tool Result Interception +~~~~~~~~~~~~~~~~~~~~~~~~ To react to the result of a tool, you can implement an EventListener or EventSubscriber that listens to the ``ToolCallsExecuted`` event. This event is dispatched after the Toolbox has executed all current tool calls and enables you @@ -320,7 +331,8 @@ to skip the next LLM call by setting a result yourself:: } }); -**Keeping Tool Messages** +Keeping Tool Messages +~~~~~~~~~~~~~~~~~~~~~ Sometimes you might wish to keep the tool messages (AssistantMessage containing the toolCalls and ToolCallMessage containing the result) in the context. Enable the keepToolMessages flag of the toolbox's AgentProcessor to ensure those @@ -347,7 +359,8 @@ messages will be added to your MessageBag:: $result = $agent->call($messages); // $messages will now include the tool messages -**Code Examples (with built-in tools)** +Code Examples (with built-in tools) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * `Brave Tool`_ * `Clock Tool`_ @@ -391,7 +404,8 @@ more accurate and context-aware results. Therefore, the component provides a bui ); $result = $agent->call($messages); -**Code Examples** +Code Examples +~~~~~~~~~~~~~ * `RAG with MongoDB`_ * `RAG with Pinecone`_ @@ -400,9 +414,10 @@ Structured Output ----------------- A typical use-case of LLMs is to classify and extract data from unstructured sources, which is supported by some models -by features like **Structured Output** or providing a **Response Format**. +by features like Structured Output or providing a Response Format.
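To illustrate the idea, extraction targets are usually modeled as small PHP classes; the ``ProductReview`` class below is a hypothetical example sketched for this purpose, not taken from the Symfony AI docs:

```php
<?php

// Hypothetical DTO describing the structure the LLM should return: the model
// is asked for JSON matching this shape, which is then hydrated back into an
// object. Plain PHP, no Symfony AI dependency.
final class ProductReview
{
    public function __construct(
        public string $summary,
        public int $rating,       // e.g. 1-5
        /** @var string[] */
        public array $pros = [],
    ) {
    }
}

$review = new ProductReview('Solid headphones', 4, ['battery life']);
echo json_encode($review), "\n";
```

Because all properties are public, ``json_encode()`` produces the same JSON shape the schema would describe, which makes the round trip easy to reason about.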
-**PHP Classes as Output** +PHP Classes as Output +~~~~~~~~~~~~~~~~~~~~~ Symfony AI supports that use-case by abstracting the hassle of defining and providing schemas to the LLM and converting the result back to PHP objects. @@ -433,7 +448,8 @@ To achieve this, a specific agent processor needs to be registered:: dump($result->getContent()); // returns an instance of `MathReasoning` class -**Array Structures as Output** +Array Structures as Output +~~~~~~~~~~~~~~~~~~~~~~~~~~ PHP array structures are also supported as response_format, which also requires the agent processor mentioned above:: @@ -462,7 +478,8 @@ Also PHP array structures as response_format are supported, which also requires dump($result->getContent()); // returns an array -**Code Examples** +Code Examples +~~~~~~~~~~~~~ * `Structured Output with PHP class`_ * `Structured Output with array`_ @@ -479,7 +496,8 @@ They are provided while instantiating the agent instance:: $agent = new Agent($platform, $model, $inputProcessors, $outputProcessors); -**InputProcessor** +InputProcessor +~~~~~~~~~~~~~~ InputProcessor instances are called in the agent before handing over the MessageBag and the $options array to the LLM and are able to mutate both on top of the Input instance provided:: @@ -502,7 +520,8 @@ and are able to mutate both on top of the Input instance provided:: } } -**OutputProcessor** +OutputProcessor +~~~~~~~~~~~~~~~ OutputProcessor instances are called after the model has provided a result and can - on top of options and messages - mutate or replace the given result:: @@ -521,7 +540,8 @@ or replace the given result:: } } -**Agent Awareness** +Agent Awareness +~~~~~~~~~~~~~~~ Both Input and Output instances provide access to the LLM used by the agent, but the agent itself is only provided if the processor implements the AgentAwareInterface, which can be combined with using the @@ -551,7 +571,7 @@ relevant information from different sources.
Memory providers inject information into the model's context without changing your application logic. Using Memory -~~~~~~~~~~~~ +^^^^^^^^^^^^ Memory integration is handled through the ``MemoryInputProcessor`` and one or more ``MemoryProviderInterface`` implementations:: @@ -575,11 +595,12 @@ Memory integration is handled through the ``MemoryInputProcessor`` and one or mo $result = $agent->call($messages); Memory Providers -~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^ The library includes several memory provider implementations that are ready to use out of the box. -**Static Memory** +Static Memory +............. Static memory provides fixed information to the agent, such as user preferences, application context, or any other information that should be consistently available without being directly added to the system prompt:: @@ -591,7 +612,8 @@ information that should be consistently available without being directly added t 'The user prefers brief explanations', ); -**Embedding Provider** +Embedding Provider +.................. This provider leverages vector storage to inject relevant knowledge based on the user's current message. It can be used for retrieving general knowledge from a store or recalling past conversation pieces that might be relevant:: @@ -605,7 +627,7 @@ for retrieving general knowledge from a store or recalling past conversation pie ); Dynamic Memory Control -~~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^^ Memory is globally configured for the agent, but you can selectively disable it for specific calls when needed. This is useful when certain interactions shouldn't be influenced by the memory context:: @@ -619,7 +641,7 @@ Testing ------- MockAgent -~~~~~~~~~ +^^^^^^^^^ For testing purposes, the Agent component provides a ``MockAgent`` class that behaves like Symfony's ``MockHttpClient``.
It provides predictable responses without making external API calls and includes assertion methods for verifying interactions:: @@ -652,7 +674,7 @@ Call Tracking and Assertions:: $agent->reset(); MockResponse Objects -~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^ Similar to ``MockHttpClient``, you can use ``MockResponse`` objects for more complex scenarios:: @@ -665,7 +687,7 @@ Similar to ``MockHttpClient``, you can use ``MockResponse`` objects for more com ]); Callable Responses -~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^ Like ``MockHttpClient``, ``MockAgent`` supports callable responses for dynamic behavior:: @@ -684,7 +706,7 @@ Like ``MockHttpClient``, ``MockAgent`` supports callable responses for dynamic b Service Testing Example -~~~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^^^ Testing a service that uses an agent:: @@ -708,7 +730,8 @@ Testing a service that uses an agent:: The ``MockAgent`` provides all the benefits of traditional mocks while offering a more intuitive API for AI agent testing, making your tests more reliable and easier to maintain. -**Code Examples** +Code Examples +~~~~~~~~~~~~~ * `Chat with static memory`_ * `Chat with embedding search memory`_ diff --git a/src/ai-bundle/doc/index.rst b/src/ai-bundle/doc/index.rst index 085a46dbc..ad6c5ce8c 100644 --- a/src/ai-bundle/doc/index.rst +++ b/src/ai-bundle/doc/index.rst @@ -19,7 +19,8 @@ Installation Configuration ------------- -**Simple Example with OpenAI** +Basic Example with OpenAI +~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: yaml @@ -32,7 +33,8 @@ Configuration default: model: 'gpt-4o-mini' -**Advanced Example with Anthropic, Azure, ElevenLabs, Gemini, Perplexity, Vertex AI, Ollama multiple agents** +Advanced Example with multiple agents +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: yaml @@ -259,7 +261,8 @@ For basic usage, specify the system prompt as a simple string: model: 'gpt-4o-mini' prompt: 'You are a helpful assistant.' 
-**Advanced Configuration** +Advanced Configuration +~~~~~~~~~~~~~~~~~~~~~~ For more control, such as including tool definitions in the system prompt, use the array format: @@ -285,7 +288,8 @@ The array format supports these options: You cannot use both ``text`` and ``file`` simultaneously. Choose one option based on your needs. -**File-Based Prompts** +File-Based Prompts +~~~~~~~~~~~~~~~~~~ For better organization and reusability, you can store system prompts in external files. This is particularly useful for: @@ -307,7 +311,10 @@ Configure the prompt with a file path: The file can be in any text format (.txt, .json, .md, etc.). The entire content of the file will be used as the system prompt text. -**Example Text File** (``prompts/assistant.txt``): +Example Text File +................. + +``prompts/assistant.txt``: .. code-block:: text @@ -318,7 +325,10 @@ The file can be in any text format (.txt, .json, .md, etc.). The entire content - Provide examples when appropriate - Be respectful and professional at all times -**Example JSON File** (``prompts/code-reviewer.json``): +Example JSON File +................. + +``prompts/code-reviewer.json``: .. code-block:: json @@ -331,7 +341,8 @@ The file can be in any text format (.txt, .json, .md, etc.). The entire content "tone": "constructive and educational" } -**Translation Support** +Translation Support +~~~~~~~~~~~~~~~~~~~ To use translated system prompts, you need to have the Symfony Translation component installed: @@ -357,10 +368,11 @@ The system prompt text will be automatically translated using the configured tra Memory Provider Configuration ----------------------------- -Memory providers allow agents to access and utilize conversation history and context from previous interactions. +Memory providers allow agents to access and utilize conversation history and context from previous interactions. This enables agents to maintain context across conversations and provide more personalized responses. 
-**Static Memory (Simple)** +Static Memory (Simple) +~~~~~~~~~~~~~~~~~~~~~~ The simplest way to add memory is to provide a string that will be used as static context: @@ -376,7 +388,8 @@ The simplest way to add memory is to provide a string that will be used as stati This static memory content is consistently available to the agent across all conversations. -**Dynamic Memory (Advanced)** +Dynamic Memory (Advanced) +~~~~~~~~~~~~~~~~~~~~~~~~~ For more sophisticated scenarios, you can reference an existing service that implements dynamic memory. Use the array syntax with a ``service`` key to explicitly reference a service: @@ -392,7 +405,8 @@ Use the array syntax with a ``service`` key to explicitly reference a service: prompt: text: 'You are a helpful assistant.' -**Memory as System Prompt** +Memory as System Prompt +~~~~~~~~~~~~~~~~~~~~~~~ Memory can work independently or alongside the system prompt: @@ -415,7 +429,8 @@ Memory can work independently or alongside the system prompt: prompt: text: 'You are a helpful assistant.' -**Custom Memory Provider Requirements** +Custom Memory Provider Requirements +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When using a service reference, the memory service must implement the ``Symfony\AI\Agent\Memory\MemoryProviderInterface``:: @@ -435,17 +450,23 @@ When using a service reference, the memory service must implement the ``Symfony\ } } -**How Memory Works** +How Memory Works +~~~~~~~~~~~~~~~~ The system uses explicit configuration to determine memory behavior: -**Static Memory Processing:** +Static Memory Processing +........................ + + 1. When you provide a string value (e.g., ``memory: 'some text'``) 2. The system creates a ``StaticMemoryProvider`` automatically 3. Content is formatted as "## Static Memory" with the provided text 4. This memory is consistently available across all conversations -**Dynamic Memory Processing:** +Dynamic Memory Processing +......................... + 1. 
When you provide an array with a service key (e.g., ``memory: {service: 'my_service'}``) 2. The ``MemoryInputProcessor`` uses the specified service directly 3. The service's ``loadMemory()`` method is called before processing user input @@ -458,7 +479,8 @@ Multi-Agent Orchestration The AI Bundle provides a configuration system for creating multi-agent orchestrators that route requests to specialized agents based on defined handoff rules. -**Multi-Agent vs Agent-as-Tool** +Multi-Agent vs Agent-as-Tool +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The AI Bundle supports two different approaches for combining multiple agents: @@ -480,14 +502,16 @@ The AI Bundle supports two different approaches for combining multiple agents: Example: A customer service system that routes to technical support, billing, or general inquiries based on the user's question. -**Key Differences** +Key Differences +^^^^^^^^^^^^^^^ * **Control Flow**: Agent-as-tool maintains control in the primary agent; Multi-Agent delegates full control to the selected agent * **Decision Making**: Agent-as-tool decides during processing; Multi-Agent decides before processing * **Response Generation**: Agent-as-tool integrates tool responses; Multi-Agent returns the selected agent's complete response * **Use Case**: Agent-as-tool for augmentation; Multi-Agent for specialization -**Configuration** +Configuration +^^^^^^^^^^^^^ .. code-block:: yaml @@ -538,7 +562,8 @@ For the example above, the service ``ai.multi_agent.support`` is registered and } } -**Handoff Rules and Fallback** +Handoff Rules and Fallback +^^^^^^^^^^^^^^^^^^^^^^^^^^ Handoff rules are defined as a key-value mapping where: @@ -561,7 +586,8 @@ Example of creating a Handoff in PHP:: The ``fallback`` parameter (required) specifies an agent to handle requests that don't match any handoff rules. This ensures all requests have a proper handler. -**How It Works** +How It Works +^^^^^^^^^^^^ 1. The orchestrator agent receives the initial request 2. 
It analyzes the request content and matches it against handoff rules @@ -569,7 +595,8 @@ The ``fallback`` parameter (required) specifies an agent to handle requests that 4. If no specific conditions match, the request is delegated to the fallback agent 5. The selected agent processes the request and returns the response -**Example: Customer Service Bot** +Example: Customer Service Bot +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: yaml @@ -586,7 +613,8 @@ The ``fallback`` parameter (required) specifies an agent to handle requests that Usage ----- -**Agent Service** +Agent Service +~~~~~~~~~~~~~ Use the `Agent` service to leverage models and tools:: @@ -612,7 +640,8 @@ Use the `Agent` service to leverage models and tools:: } } -**Register Processors** +Register Processors +~~~~~~~~~~~~~~~~~~~ By default, all services implementing the ``InputProcessorInterface`` or the ``OutputProcessorInterface`` interfaces are automatically applied to every ``Agent``. @@ -640,7 +669,8 @@ the ``#[AsOutputProcessor]`` attributes:: } } -**Register Tools** +Register Tools +~~~~~~~~~~~~~~ To use existing tools, you can register them as a service: @@ -760,14 +790,16 @@ The token usage information can be accessed from the result metadata:: } } -**Supported Platforms** +Supported Platforms +~~~~~~~~~~~~~~~~~~~ Token usage tracking is currently supported, and by default enabled, for the following platforms: * **OpenAI**: Tracks all token types including cached and thinking tokens * **Mistral**: Tracks basic token usage and rate limit information -**Disable Tracking** +Disable Tracking +~~~~~~~~~~~~~~~~ To disable token usage tracking for an agent, set the ``track_token_usage`` option to ``false``: @@ -785,7 +817,8 @@ Vectorizers Vectorizers are components that convert text documents into vector embeddings for storage and retrieval. They can be configured once and reused across multiple indexers, providing better maintainability and consistency. 
-**Configuring Vectorizers** +Configuring Vectorizers +~~~~~~~~~~~~~~~~~~~~~~~ Vectorizers are defined in the ``vectorizer`` section of your configuration: @@ -808,7 +841,8 @@ Vectorizers are defined in the ``vectorizer`` section of your configuration: platform: 'ai.platform.mistral' model: 'mistral-embed' -**Using Vectorizers in Indexers** +Using Vectorizers in Indexers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Once configured, vectorizers can be referenced by name in indexer configurations: @@ -828,7 +862,8 @@ Once configured, vectorizers can be referenced by name in indexer configurations vectorizer: 'ai.vectorizer.mistral_embed' store: 'ai.store.memory.kb' -**Benefits of Configured Vectorizers** +Benefits of Configured Vectorizers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * **Reusability**: Define once, use in multiple indexers * **Consistency**: Ensure all indexers using the same vectorizer have identical embedding configuration diff --git a/src/mcp-bundle/doc/index.rst b/src/mcp-bundle/doc/index.rst index 3f499c754..5c641a532 100644 --- a/src/mcp-bundle/doc/index.rst +++ b/src/mcp-bundle/doc/index.rst @@ -3,7 +3,7 @@ MCP Bundle Symfony integration bundle for `Model Context Protocol`_ using the official MCP SDK `mcp/sdk`_. -**Supports MCP capabilities (tools, prompts, resources) as server via HTTP transport and STDIO. Resource templates implementation ready but awaiting MCP SDK support.** +Supports MCP capabilities (tools, prompts, resources) as a server via HTTP transport and STDIO. The resource template implementation is ready but awaits MCP SDK support. Installation ------------ @@ -18,16 +18,21 @@ Usage First, you need to decide whether your application should act as an MCP server or client. Both can be configured in the ``mcp`` section of your ``config/packages/mcp.yaml`` file.
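As a rough sketch, a server-side ``config/packages/mcp.yaml`` might look like the fragment below. The key names are assumptions inferred from the ``client_transports`` section described in this documentation, so verify them against the bundle's configuration reference before use:

```yaml
# config/packages/mcp.yaml — hypothetical sketch, keys assumed, not verified.
mcp:
    client_transports:
        stdio: true   # expose capabilities to STDIO clients (e.g. Claude Desktop)
        http: true    # expose capabilities over the HTTP transport
```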
-**Act as Server** +Act as Server +~~~~~~~~~~~~~ To use your application as an MCP server, exposing tools, prompts, resources, and resource templates to clients like `Claude Desktop`_, you need to configure in the ``client_transports`` section the transports you want to expose to clients. You can use either STDIO or HTTP. -**Creating MCP Capabilities** +Creating MCP Capabilities +......................... MCP capabilities are automatically discovered using PHP attributes. -**Tools** - Actions that can be executed:: +Tools +^^^^^ + +Actions that can be executed:: use Mcp\Capability\Attribute\McpTool; @@ -40,7 +45,10 @@ MCP capabilities are automatically discovered using PHP attributes. } } -**Prompts** - System instructions for AI context:: +Prompts +^^^^^^^ + +System instructions for AI context:: use Mcp\Capability\Attribute\McpPrompt; @@ -55,7 +63,10 @@ MCP capabilities are automatically discovered using PHP attributes. } } -**Resources** - Static data that can be read:: +Resources +^^^^^^^^^ + +Static data that can be read:: use Mcp\Capability\Attribute\McpResource; @@ -72,7 +83,10 @@ MCP capabilities are automatically discovered using PHP attributes. } } -**Resource Templates** - Dynamic resources with parameters: +Resource Templates +^^^^^^^^^^^^^^^^^^ + +Dynamic resources with parameters: .. note:: @@ -99,7 +113,8 @@ MCP capabilities are automatically discovered using PHP attributes. All capabilities are automatically discovered in the ``src/`` directory when the server starts. -**Transport Types** +Transport Types +............... The MCP Bundle supports two transport types for server communication: @@ -113,8 +128,8 @@ The HTTP transport uses the MCP SDK's ``StreamableHttpTransport`` which supports - CORS headers for cross-origin requests - Proper MCP initialization handshake - -**Act as Client** +Act as Client +~~~~~~~~~~~~~ .. 
warning:: @@ -209,7 +224,8 @@ Event System The MCP Bundle automatically configures the Symfony EventDispatcher to work with the MCP SDK's event system. This allows you to listen for changes to your server's capabilities. -**Available Events** +Available Events +~~~~~~~~~~~~~~~~ The MCP SDK dispatches the following events when capabilities are registered: @@ -218,7 +234,8 @@ The MCP SDK dispatches the following events when capabilities are registered: - ``Mcp\Event\ResourceTemplateListChangedEvent`` - When a resource template is registered - ``Mcp\Event\PromptListChangedEvent`` - When a prompt is registered -**Listening to Events** +Listening to Events +~~~~~~~~~~~~~~~~~~~ You can create event listeners to respond to capability changes:: diff --git a/src/platform/doc/gemini-server-tools.rst b/src/platform/doc/gemini-server-tools.rst index 8a606395e..e25c09bc2 100644 --- a/src/platform/doc/gemini-server-tools.rst +++ b/src/platform/doc/gemini-server-tools.rst @@ -15,7 +15,8 @@ Gemini provides several server-side tools that can be enabled when calling the m Available Server Tools ---------------------- -**URL Context** +URL Context +~~~~~~~~~~~ The URL Context tool allows Gemini to fetch and analyze content from web pages. This is useful for: @@ -37,8 +38,8 @@ The URL Context tool allows Gemini to fetch and analyze content from web pages. 
$result = $platform->invoke($model, $messages); - -**Google Search** +Google Search +~~~~~~~~~~~~~ The Google Search tool enables the model to search the web and incorporate search results into its responses:: @@ -54,7 +55,8 @@ The Google Search tool enables the model to search the web and incorporate searc $result = $platform->invoke($model, $messages); -**Code Execution** +Code Execution +~~~~~~~~~~~~~~ The Code Execution tool provides a sandboxed environment for running code:: diff --git a/src/platform/doc/index.rst b/src/platform/doc/index.rst index 6ec28740a..e165212cd 100644 --- a/src/platform/doc/index.rst +++ b/src/platform/doc/index.rst @@ -61,13 +61,14 @@ capabilities, and additional options. Usually, bridges to specific providers ext start for vendor-specific models and their capabilities, see ``Symfony\AI\Platform\Bridge\Anthropic\Claude`` or ``Symfony\AI\Platform\Bridge\OpenAi\Gpt``. -**Capabilities** are a list of strings defined by ``Symfony\AI\Platform\Capability``, which can be used to check if a model +Capabilities are a list of strings defined by ``Symfony\AI\Platform\Capability``, which can be used to check if a model supports a specific feature, like ``Capability::INPUT_AUDIO`` or ``Capability::OUTPUT_IMAGE``. -**Options** are additional parameters that can be passed to the model, like ``temperature`` or ``max_tokens``, and are +Options are additional parameters that can be passed to the model, like ``temperature`` or ``max_tokens``, and are usually defined by the specific models and their documentation. -**Model Size Variants** +Model Size Variants +~~~~~~~~~~~~~~~~~~~ For providers like Ollama, you can specify model size variants using a colon notation (e.g., ``qwen3:32b``, ``llama3:7b``).
If the exact model name with size variant is not found in the catalog, the system will automatically fall back to the base @@ -85,7 +86,8 @@ You can also combine size variants with query parameters:: // Get model with size variant and query parameters $model = $catalog->getModel('qwen3:32b?temperature=0.5&top_p=0.9'); -**Supported Models & Platforms** +Supported Models & Platforms +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * **Language Models** * `OpenAI's GPT`_ with `OpenAI`_ and `Azure`_ as Platform @@ -146,7 +148,8 @@ have different content types, like ``Text``, ``Image`` or ``Audio``, and can be Message::ofUser('Please describe this picture?', Image::fromFile('/path/to/image.jpg')), ); -**Message Unique IDs** +Message Unique IDs +~~~~~~~~~~~~~~~~~~ Each message automatically receives a unique identifier (UUID v7) upon creation. This provides several benefits: @@ -200,7 +203,9 @@ Events. Symfony AI supports that by abstracting the conversion and returning a ` In a terminal application this generator can be used directly, but with a web app an additional layer like `Mercure`_ needs to be used. -**Code Examples** +Code Examples +~~~~~~~~~~~~~ + * `Streaming Claude`_ * `Streaming GPT`_ * `Streaming Mistral`_ @@ -227,7 +232,9 @@ Some LLMs also support images as input, which Symfony AI supports as content typ ); $result = $agent->call($messages); -**Code Examples** +Code Examples +~~~~~~~~~~~~~ + * `Binary Image Input with GPT`_ * `Image URL Input with GPT`_ @@ -251,7 +258,8 @@ Similar to images, some LLMs also support audio as input, which is just another ); $result = $agent->call($messages); -**Code Examples** +Code Examples +~~~~~~~~~~~~~ * `Audio Input with GPT`_ @@ -272,7 +280,8 @@ The standalone usage results in a ``Vector`` instance:: dump($vectors[0]->getData()); // returns something like: [0.123, -0.456, 0.789, ...]
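Embedding vectors like the one dumped above are typically compared by cosine similarity when searching for related documents. A minimal plain-PHP helper, independent of Symfony AI and shown only as a sketch, could look like this:

```php
<?php

// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|). Values near 1.0 mean the vectors point in
// almost the same direction, i.e. the texts are semantically close.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

echo cosineSimilarity([0.123, -0.456, 0.789], [0.123, -0.456, 0.789]); // ~1.0 (identical direction)
```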
-**Code Examples** +Code Examples +~~~~~~~~~~~~~ * `Embeddings with OpenAI`_ * `Embeddings with Voyage`_ @@ -322,7 +331,10 @@ It supports returning either: echo $result->asText(); // "Fake result" -**Dynamic Text Results**:: +Dynamic Text Results +~~~~~~~~~~~~~~~~~~~~ + +:: $platform = new InMemoryPlatform( fn($model, $input, $options) => "Echo: {$input}" @@ -331,7 +343,10 @@ It supports returning either: $result = $platform->invoke(new Model('test'), 'Hello AI'); echo $result->asText(); // "Echo: Hello AI" -**Vector Results**:: +Vector Results +~~~~~~~~~~~~~~ + +:: use Symfony\AI\Platform\Result\VectorResult; @@ -342,7 +357,10 @@ It supports returning either: $result = $platform->invoke(new Model('test'), 'vectorize this text'); $vectors = $result->asVectors(); // Returns Vector object with [0.1, 0.2, 0.3, 0.4] -**Binary Results**:: +Binary Results +~~~~~~~~~~~~~~ + +:: use Symfony\AI\Platform\Result\BinaryResult; @@ -353,8 +371,8 @@ It supports returning either: $result = $platform->invoke(new Model('test'), 'generate PDF document'); $binary = $result->asBinary(); // Returns Binary object with content and MIME type - -**Raw Results** +Raw Results +~~~~~~~~~~~ The platform automatically uses the ``getRawResult()`` from any ``ResultInterface`` returned by closures. For string results, it creates an ``InMemoryRawResult`` to simulate real API response metadata. @@ -364,7 +382,8 @@ This allows fast and isolated testing of AI-powered features without relying on This requires `cURL` and the `ext-curl` extension to be installed. 
-**Code Examples** +Code Examples +~~~~~~~~~~~~~ * `Parallel GPT Calls`_ * `Parallel Embeddings Calls`_ diff --git a/src/platform/doc/vertexai-server-tools.rst b/src/platform/doc/vertexai-server-tools.rst index cd5b8454f..96205e176 100644 --- a/src/platform/doc/vertexai-server-tools.rst +++ b/src/platform/doc/vertexai-server-tools.rst @@ -17,7 +17,8 @@ Vertex AI provides several server-side tools that can be enabled when calling th Available Server Tools ---------------------- -**URL Context** +URL Context +~~~~~~~~~~~ The URL Context tool allows the model to fetch and analyze content from specified web pages. This is useful for: @@ -36,8 +37,9 @@ The URL Context tool allows the model to fetch and analyze content from specifie $result = $platform->invoke($model, $messages); +Grounding with Google Search +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -**Grounding with Google Search** The Grounding tool allows the model to connect its responses to verifiable sources of information, enhancing the reliability of its outputs. More at https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview Below is an example of grounding a model's responses using Google Search, which uses publicly-available web data. @@ -61,7 +63,8 @@ More info can be found at https://cloud.google.com/vertex-ai/generative-ai/docs/ $result = $platform->invoke($model, $messages); -**Code Execution** +Code Execution +~~~~~~~~~~~~~~ Executes code in a Google-managed sandbox environment and returns both the code and its output. 
More info can be found at https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/code-execution diff --git a/src/platform/doc/vertexai.rst b/src/platform/doc/vertexai.rst index 70f52bd3a..e4882b0e9 100644 --- a/src/platform/doc/vertexai.rst +++ b/src/platform/doc/vertexai.rst @@ -18,7 +18,8 @@ To use Vertex AI with Symfony AI Platform, you need to install the platform comp Setup ----- -**Authentication** +Authentication +~~~~~~~~~~~~~~ Vertex AI requires Google Cloud authentication. Follow the `Google cloud authentication guide`_ to set up your credentials. @@ -35,7 +36,8 @@ For ADC, install the Google Cloud SDK and authenticate: For detailed authentication setup, see `Setting up authentication for Vertex AI`_. -**Environment Variables** +Environment Variables +~~~~~~~~~~~~~~~~~~~~~ Configure your Google Cloud project and location: @@ -94,7 +96,8 @@ Common model availability: * **europe-west1**: Good model availability * **global**: Limited model availability, some newer models may not be available -**Troubleshooting Model Availability** +Troubleshooting Model Availability +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you encounter an error like:: @@ -107,7 +110,8 @@ This typically means: 3. Use an alternative model that's available in your location 4. Check the `Google Cloud Console for Vertex AI`_ for model availability in your region -**Checking Model Availability** +Checking Model Availability +^^^^^^^^^^^^^^^^^^^^^^^^^^^ You can check which models are available in your location using the Google Cloud Console or gcloud CLI::