diff --git a/docs/bundles/ai-bundle.rst b/docs/bundles/ai-bundle.rst index 6c20903e0..06495c138 100644 --- a/docs/bundles/ai-bundle.rst +++ b/docs/bundles/ai-bundle.rst @@ -6,6 +6,7 @@ Symfony integration bundle for Symfony AI components. Integrating: * `Symfony AI Agent`_ +* `Symfony AI Chat`_ * `Symfony AI Platform`_ * `Symfony AI Store`_ @@ -33,7 +34,7 @@ Basic Example with OpenAI default: model: 'gpt-4o-mini' -Advanced Example with multiple agents +Advanced Example with Multiple Agents ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: yaml @@ -365,7 +366,8 @@ Then configure the prompt with translation enabled: enable_translation: true translation_domain: 'ai_prompts' # Optional: specify translation domain -The system prompt text will be automatically translated using the configured translator service. If no translation domain is specified, the default domain will be used. +The system prompt text will be automatically translated using the configured translator service. +If no translation domain is specified, the default domain will be used. 
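To illustrate the translation behavior described above, a hypothetical catalog for the ``ai_prompts`` domain could look like the following sketch. It assumes the system prompt text itself serves as the translation key; the file name and key are assumptions for illustration, not prescribed by the bundle:

```yaml
# translations/ai_prompts.de.yaml (hypothetical file name)
# Assumption: the configured system prompt text acts as the translation key.
'You are a helpful assistant.': 'Du bist ein hilfreicher Assistent.'
```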
Memory Provider Configuration ----------------------------- @@ -434,7 +436,8 @@ Memory can work independently or alongside the system prompt: Custom Memory Provider Requirements ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -When using a service reference, the memory service must implement the ``Symfony\AI\Agent\Memory\MemoryProviderInterface``:: +When using a service reference, the memory service must implement the +:class:`Symfony\\AI\\Agent\\Memory\\MemoryProviderInterface`:: use Symfony\AI\Agent\Input; use Symfony\AI\Agent\Memory\Memory; @@ -446,36 +449,13 @@ When using a service reference, the memory service must implement the ``Symfony\ { // Return an array of Memory objects containing relevant conversation history return [ - new Memory('Previous conversation context...'), + new Memory('Username: OskarStark'), + new Memory('Age: 40'), new Memory('User preferences: prefers concise answers'), ]; } } -How Memory Works -~~~~~~~~~~~~~~~~ - -The system uses explicit configuration to determine memory behavior: - -Static Memory Processing -........................ - - -1. When you provide a string value (e.g., ``memory: 'some text'``) -2. The system creates a ``StaticMemoryProvider`` automatically -3. Content is formatted as "## Static Memory" with the provided text -4. This memory is consistently available across all conversations - -Dynamic Memory Processing -......................... - -1. When you provide an array with a service key (e.g., ``memory: {service: 'my_service'}``) -2. The ``MemoryInputProcessor`` uses the specified service directly -3. The service's ``loadMemory()`` method is called before processing user input -4. Dynamic memory content is injected based on the current context - -In both cases, memory content is prepended to the system message, allowing the agent to utilize the context effectively. 
- Multi-Agent Orchestration ------------------------- @@ -581,11 +561,6 @@ Example of creating a Handoff in PHP:: when: ['code', 'debug', 'implementation', 'refactor', 'programming'] ); - $documentationHandoff = new Handoff( - to: $documentationAgent, - when: ['document', 'readme', 'explain', 'tutorial'] - ); - The ``fallback`` parameter (required) specifies an agent to handle requests that don't match any handoff rules. This ensures all requests have a proper handler. How It Works @@ -636,7 +611,7 @@ This is useful for testing platform configurations and quick interactions with A ``ai:agent:call`` ~~~~~~~~~~~~~~~~~ -The ``ai:agent:call`` command (alias: ``ai:chat``) provides an interactive chat interface to communicate with configured agents. +The ``ai:agent:call`` command provides an interactive chat interface to communicate with configured agents. This is useful for testing agent configurations, tools, and conversational flows. .. code-block:: terminal @@ -724,7 +699,7 @@ Usage Agent Service ~~~~~~~~~~~~~ -Use the `Agent` service to leverage models and tools:: +Use the :class:`Symfony\\AI\\Agent\\Agent` service to leverage models and tools:: use Symfony\AI\Agent\AgentInterface; use Symfony\AI\Platform\Message\Message; @@ -751,11 +726,11 @@ Use the `Agent` service to leverage models and tools:: Register Processors ~~~~~~~~~~~~~~~~~~~ -By default, all services implementing the ``InputProcessorInterface`` or the -``OutputProcessorInterface`` interfaces are automatically applied to every ``Agent``. +By default, all services implementing the :class:`Symfony\\AI\\Agent\\InputProcessorInterface` or the +:class:`Symfony\\AI\\Agent\\OutputProcessorInterface` interfaces are automatically applied to every :class:`Symfony\\AI\\Agent\\Agent`. 
-This behavior can be overridden/configured with the ``#[AsInputProcessor]`` and -the ``#[AsOutputProcessor]`` attributes:: +This behavior can be overridden/configured with the :class:`Symfony\\AI\\AiBundle\\Attribute\\AsInputProcessor` and +the :class:`Symfony\\AI\\AiBundle\\Attribute\\AsOutputProcessor` attributes:: use Symfony\AI\Agent\Input; use Symfony\AI\Agent\InputProcessorInterface; @@ -804,7 +779,7 @@ To use existing tools, you can register them as a service: Symfony\AI\Agent\Toolbox\Tool\Brave: $apiKey: '%env(BRAVE_API_KEY)%' -Custom tools can be registered by using the ``#[AsTool]`` attribute:: +Custom tools can be registered by using the :class:`Symfony\\AI\\Agent\\Toolbox\\Attribute\\AsTool` attribute:: use Symfony\AI\Agent\Toolbox\Attribute\AsTool; @@ -838,8 +813,8 @@ To inject only specific tools, list them in the configuration: tools: - 'Symfony\AI\Agent\Toolbox\Tool\SimilaritySearch' -To restrict the access to a tool, you can use the ``IsGrantedTool`` attribute, which -works similar to ``IsGranted`` attribute in `symfony/security-http`. For this to work, +To restrict the access to a tool, you can use the :class:`Symfony\\AI\\Agent\\Attribute\\IsGrantedTool` attribute, which +works similarly to the :class:`Symfony\\Component\\Security\\Http\\Attribute\\IsGranted` attribute in `symfony/security-http`. For this to work, make sure you have `symfony/security-core` installed in your project. :: @@ -855,7 +830,8 @@ make sure you have `symfony/security-core` installed in your project. return 'ACME Corp.'; } } + +The attribute :class:`Symfony\\AI\\Agent\\Attribute\\IsGrantedTool` can be added at class or method level, even multiple times. If multiple attributes apply to one tool call, a logical AND is used and all access decisions have to grant access. 
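As a sketch of combining class- and method-level access checks (the tool class, role names, and return value are invented for illustration, not part of the bundle), both attributes must grant access before the tool call is allowed:

```php
<?php

use Symfony\AI\Agent\Attribute\IsGrantedTool;
use Symfony\AI\Agent\Toolbox\Attribute\AsTool;

// Hypothetical tool: the class-level and the method-level attribute
// are combined with a logical AND, so a caller needs both roles.
#[AsTool('company_report', 'Provides internal company reports')]
#[IsGrantedTool('ROLE_USER')]
class CompanyReport
{
    #[IsGrantedTool('ROLE_REPORTING')]
    public function __invoke(): string
    {
        return 'Internal quarterly report contents';
    }
}
```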
@@ -898,14 +874,6 @@ The token usage information can be accessed from the result metadata:: } } -Supported Platforms -~~~~~~~~~~~~~~~~~~~ - -Token usage tracking is currently supported, and by default enabled, for the following platforms: - -* **OpenAI**: Tracks all token types including cached and thinking tokens -* **Mistral**: Tracks basic token usage and rate limit information - Disable Tracking ~~~~~~~~~~~~~~~~ @@ -916,8 +884,8 @@ To disable token usage tracking for an agent, set the ``track_token_usage`` opti ai: agent: my_agent: - track_token_usage: false model: 'gpt-4o-mini' + track_token_usage: false Vectorizers ----------- @@ -990,5 +958,6 @@ The profiler panel provides insights into the agent's execution: .. _`Symfony AI Agent`: https://github.com/symfony/ai-agent +.. _`Symfony AI Chat`: https://github.com/symfony/ai-chat .. _`Symfony AI Platform`: https://github.com/symfony/ai-platform .. _`Symfony AI Store`: https://github.com/symfony/ai-store diff --git a/docs/components/agent.rst b/docs/components/agent.rst index 7b666f05d..9c8c4b63e 100644 --- a/docs/components/agent.rst +++ b/docs/components/agent.rst @@ -1,14 +1,13 @@ Symfony AI - Agent Component ============================ -The Agent component provides a framework for building AI agents that, sits on top of the Platform and Store components, -allowing you to create agents that can interact with users, perform tasks, and manage workflows. +The Agent component provides a framework for building AI agents that +sits on top of the Platform and Store components, allowing you to create +agents that can interact with users, perform tasks, and manage workflows. Installation ------------ -Install the component using Composer: - .. 
code-block:: terminal $ composer require symfony/ai-agent @@ -16,26 +15,26 @@ Basic Usage ----------- -To instantiate an agent, you need to pass a ``Symfony\AI\Platform\PlatformInterface`` and a -``Symfony\AI\Platform\Model`` instance to the ``Symfony\AI\Agent\Agent`` class:: +To instantiate an agent, you need to pass a :class:`Symfony\\AI\\Platform\\PlatformInterface` and a +:class:`Symfony\\AI\\Platform\\Model` instance to the :class:`Symfony\\AI\\Agent\\Agent` class:: use Symfony\AI\Agent\Agent; use Symfony\AI\Platform\Bridge\OpenAi\Gpt; use Symfony\AI\Platform\Bridge\OpenAi\PlatformFactory; $platform = PlatformFactory::create($apiKey); - $model = new Gpt(Gpt::GPT_4O_MINI); + $model = 'gpt-4o-mini'; $agent = new Agent($platform, $model); -You can then run the agent with a ``Symfony\AI\Platform\Message\MessageBagInterface`` instance as input and an optional +You can then run the agent with a :class:`Symfony\\AI\\Platform\\Message\\MessageBagInterface` instance as input and an optional array of options:: use Symfony\AI\Agent\Agent; use Symfony\AI\Platform\Message\Message; use Symfony\AI\Platform\Message\MessageBag; - // Platform & LLM instantiation + // Platform instantiation $agent = new Agent($platform, $model); $input = new MessageBag( @@ -67,7 +66,7 @@ Tool calling can be enabled by registering the processors in the agent:: use Symfony\AI\Agent\Toolbox\AgentProcessor; use Symfony\AI\Agent\Toolbox\Toolbox; - // Platform & LLM instantiation + // Platform instantiation $yourTool = new YourTool(); $toolbox = new Toolbox([$yourTool]); $toolProcessor = new AgentProcessor($toolbox); $agent = new Agent($platform, $model, inputProcessors: [$toolProcessor], outputProcessors: [$toolProcessor]); @@ -76,7 +75,7 @@ Tool calling can be enabled by registering the processors in the agent:: -Custom tools can basically be any class, but must configure by the ``#[AsTool]`` attribute:: +Custom tools can basically be any class, but must be configured with the :class:`Symfony\\AI\\Agent\\Toolbox\\Attribute\\AsTool` attribute:: use 
Symfony\AI\Toolbox\Attribute\AsTool; @@ -98,7 +97,7 @@ JsonSerializable interface, to JSON strings for you. So you can return arrays or Tool Methods ~~~~~~~~~~~~ -You can configure the method to be called by the LLM with the #[AsTool] attribute and have multiple tools per class:: +You can configure the method to be called by the LLM with the :class:`Symfony\\AI\\Agent\\Toolbox\\Attribute\\AsTool` attribute and have multiple tools per class:: use Symfony\AI\Toolbox\Attribute\AsTool; @@ -128,14 +127,14 @@ You can configure the method to be called by the LLM with the #[AsTool] attribut Tool Parameters ~~~~~~~~~~~~~~~ -Symfony AI generates a JSON Schema representation for all tools in the Toolbox based on the #[AsTool] attribute and +Symfony AI generates a JSON Schema representation for all tools in the :class:`Symfony\\AI\\Agent\\Toolbox\\Toolbox` based on the :class:`Symfony\\AI\\Agent\\Toolbox\\Attribute\\AsTool` attribute and method arguments and param comments in the doc block. Additionally, JSON Schema support validation rules, which are -partially support by LLMs like GPT. +partially supported by LLMs like GPT. -Parameter Validation with #[With] Attribute -........................................... +Parameter Validation with ``#[With]`` Attribute +............................................... -To leverage JSON Schema validation rules, configure the ``#[With]`` attribute on the method arguments of your tool:: +To leverage JSON Schema validation rules, configure the :class:`Symfony\\AI\\Platform\\Contract\\JsonSchema\\Attribute\\With` attribute on the method arguments of your tool:: use Symfony\AI\Agent\Toolbox\Attribute\AsTool; use Symfony\AI\Platform\Contract\JsonSchema\Attribute\With; @@ -160,12 +159,12 @@ To leverage JSON Schema validation rules, configure the ``#[With]`` attribute on } } -See attribute class ``Symfony\AI\Platform\Contract\JsonSchema\Attribute\With`` for all available options. 
+See attribute class :class:`Symfony\\AI\\Platform\\Contract\\JsonSchema\\Attribute\\With` for all available options. Automatic Enum Validation ......................... -For PHP backed enums, Symfony AI provides automatic validation without requiring any ``#[With]`` attributes:: +For PHP backed enums, automatic validation is supported without requiring any :class:`Symfony\\AI\\Platform\\Contract\\JsonSchema\\Attribute\\With` attribute:: enum Priority: int { @@ -205,13 +204,13 @@ This eliminates the need for manual ``#[With(enum: [...])]`` attributes when usi .. note:: - Please be aware, that this is only converted in a JSON Schema for the LLM to respect, but not validated by Symfony AI. + Please be aware that this is only converted into a JSON Schema for the LLM to respect, but not validated by Symfony AI itself. Third-Party Tools ~~~~~~~~~~~~~~~~~ -In some cases you might want to use third-party tools, which are not part of your application. Adding the :class:`Symfony\\AI\\Agent\\Toolbox\\Attribute\\AsTool` +attribute to the class is not possible in those cases, but you can explicitly register the tool in the :class:`Symfony\\AI\\Agent\\Toolbox\\ToolFactory\\MemoryToolFactory`:: use Symfony\AI\Agent\Toolbox\Toolbox; use Symfony\AI\Agent\Toolbox\ToolFactory\MemoryToolFactory; @@ -225,7 +224,7 @@ attribute to the class is not possible in those cases, but you can explicitly re Please be aware that not all return types are supported by the toolbox, so a decorator might still be needed. 
-This can be combined with the ChainFactory which enables you to use explicitly registered tools and ``#[AsTool]`` tagged +This can be combined with the :class:`Symfony\\AI\\Agent\\Toolbox\\ToolFactory\\ChainFactory` which enables you to use explicitly registered tools and :class:`Symfony\\AI\\Agent\\Toolbox\\Attribute\\AsTool` tagged tools in the same chain - which even enables you to overwrite the pre-existing configuration of a tool:: use Symfony\AI\Agent\Toolbox\Toolbox; @@ -263,7 +262,7 @@ Fault Tolerance ~~~~~~~~~~~~~~~ To gracefully handle errors that occur during tool calling, e.g. wrong tool names or runtime errors, you can use the -``FaultTolerantToolbox`` as a decorator for the Toolbox. It will catch the exceptions and return readable error messages +:class:`Symfony\\AI\\Agent\\Toolbox\\FaultTolerantToolbox` as a decorator for the :class:`Symfony\\AI\\Agent\\Toolbox\\Toolbox`. It will catch the exceptions and return readable error messages to the LLM:: use Symfony\AI\Agent\Agent; @@ -277,17 +276,19 @@ to the LLM:: $agent = new Agent($platform, $model, inputProcessor: [$toolProcessor], outputProcessor: [$toolProcessor]); -If you want to expose the underlying error to the LLM, you can throw a custom exception that implements `ToolExecutionExceptionInterface`:: +If you want to expose the underlying error to the LLM, you can throw a custom exception that implements :class:`Symfony\\AI\\Agent\\Toolbox\\Exception\\ToolExecutionExceptionInterface`:: use Symfony\AI\Agent\Toolbox\Exception\ToolExecutionExceptionInterface; class EntityNotFoundException extends \RuntimeException implements ToolExecutionExceptionInterface { - public function __construct(private string $entityName, private int $id) - { + public function __construct( + private string $entityName, + private int $id, + ) { } - public function getToolCallResult(): mixed + public function getToolCallResult(): string { return \sprintf('No %s found with id %d', $this->entityName, $this->id); } @@ -296,13 
+297,18 @@ If you want to expose the underlying error to the LLM, you can throw a custom ex #[AsTool('get_user_age', 'Get age by user id')] class GetUserAge { - public function __construct(private UserRepository $userRepository) - { + public function __construct( + private UserRepository $userRepository, + ) { } public function __invoke(int $id): int { - $user = $this->userRepository->find($id) ?? throw new EntityNotFoundException('user', $id); + $user = $this->userRepository->find($id); + + if (null === $user) { + throw new EntityNotFoundException('user', $id); + } return $user->getAge(); } @@ -319,9 +325,9 @@ tools option with a list of tool names:: Tool Result Interception ~~~~~~~~~~~~~~~~~~~~~~~~ -To react to the result of a tool, you can implement an EventListener or EventSubscriber, that listens to the -``ToolCallsExecuted`` event. This event is dispatched after the Toolbox executed all current tool calls and enables you -to skip the next LLM call by setting a result yourself:: +To react to the result of a tool, you can implement an event listener that listens to the +:class:`Symfony\\AI\\Agent\\Toolbox\\Event\\ToolCallsExecuted` event. This event is dispatched after the :class:`Symfony\\AI\\Agent\\Toolbox\\Toolbox` executed all current +tool calls and enables you to skip the next LLM call by setting a result yourself:: $eventDispatcher->addListener(ToolCallsExecuted::class, function (ToolCallsExecuted $event): void { foreach ($event->toolCallResults as $toolCallResult) { @@ -335,7 +341,9 @@ Tool Call Lifecycle Events ~~~~~~~~~~~~~~~~~~~~~~~~~~ If you need to react more granularly to the lifecycle of individual tool calls, you can listen to the 
These are dispatched at different stages:: +:class:`Symfony\\AI\\Agent\\Toolbox\\Event\\ToolCallArgumentsResolved`, +:class:`Symfony\\AI\\Agent\\Toolbox\\Event\\ToolCallSucceeded` and +:class:`Symfony\\AI\\Agent\\Toolbox\\Event\\ToolCallFailed` events. These are dispatched at different stages:: $eventDispatcher->addListener(ToolCallArgumentsResolved::class, function (ToolCallArgumentsResolved $event): void { // Let the client know, that the tool $event->toolCall->name was executed @@ -352,25 +360,26 @@ If you need to react more granularly to the lifecycle of individual tool calls, Keeping Tool Messages ~~~~~~~~~~~~~~~~~~~~~ -Sometimes you might wish to keep the tool messages (AssistantMessage containing the toolCalls and ToolCallMessage -containing the result) in the context. Enable the keepToolMessages flag of the toolbox' AgentProcessor to ensure those -messages will be added to your MessageBag:: +Sometimes you might wish to keep the tool messages (:class:`Symfony\\AI\\Platform\\Message\\AssistantMessage` containing the ``toolCalls`` and :class:`Symfony\\AI\\Platform\\Message\\ToolCallMessage` +containing the result) in the context. 
Enable the ``keepToolMessages`` flag of the toolbox' :class:`Symfony\\AI\\Agent\\Toolbox\\AgentProcessor` +to ensure those messages will be added to your :class:`Symfony\\AI\\Platform\\Message\\MessageBag`:: use Symfony\AI\Agent\Toolbox\AgentProcessor; use Symfony\AI\Agent\Toolbox\Toolbox; - // Platform & LLM instantiation + // Platform instantiation + $messages = new MessageBag( Message::forSystem(<<submit(Message::ofUser('Hello')); @@ -43,6 +40,7 @@ for adding messages to the message store, and returning the messages from a stor This leads to a store implementing two methods:: + use Symfony\AI\Platform\Message\MessageBag; use Symfony\AI\Store\MessageStoreInterface; class MyCustomStore implements MessageStoreInterface @@ -89,7 +87,8 @@ Commands -------- While using the `Chat` component in your Symfony application along with the ``AiBundle``, -you can use the ``bin/console ai:message-store:setup`` command to initialize the message store and ``bin/console ai:message-store:drop`` to clean up the message store: +you can use the ``bin/console ai:message-store:setup`` command to initialize the message +store and ``bin/console ai:message-store:drop`` to clean up the message store: .. code-block:: yaml diff --git a/docs/components/platform.rst b/docs/components/platform.rst index e165212cd..9b811f933 100644 --- a/docs/components/platform.rst +++ b/docs/components/platform.rst @@ -1,13 +1,12 @@ Symfony AI - Platform Component =============================== -The Platform component provides an abstraction for interacting with different models, their providers and contracts. +The Platform component provides an abstraction for interacting with different +models, their providers and contracts. Installation ------------ -Install the component using Composer: - .. code-block:: terminal $ composer require symfony/ai-platform @@ -23,8 +22,9 @@ specific use cases or performance requirements. 
Usage ----- -The instantiation of the ``Symfony\AI\Platform\Platform`` class is usually delegated to a provider-specific factory, -with a provider being OpenAI, Azure, Google, Replicate, and others. +The instantiation of the :class:`Symfony\\AI\\Platform\\Platform` class is +usually delegated to a provider-specific factory, with providers such as +OpenAI, Anthropic, Google, Replicate, and others. For example, to use the OpenAI provider, you would typically do something like this:: use Symfony\AI\Platform\Bridge\OpenAi\Gpt; use Symfony\AI\Platform\Bridge\OpenAi\PlatformFactory; - // Platform $platform = PlatformFactory::create(env('OPENAI_API_KEY')); - // Embeddings Model - $embeddings = new Embeddings(Embeddings::TEXT_3_SMALL); - - // Language Model in version gpt-4o-mini - $model = new Gpt(Gpt::GPT_4O_MINI); - -And with a ``Symfony\AI\Platform\PlatformInterface`` instance, and a ``Symfony\AI\Platform\Model`` instance, you can now -use the platform to interact with the AI model:: +With this :class:`Symfony\\AI\\Platform\\PlatformInterface` instance you can now interact with the LLM:: // Generate a vector embedding for a text, returns a Symfony\AI\Platform\Result\VectorResult - $vectorResult = $platform->invoke($embeddings, 'What is the capital of France?'); + $vectorResult = $platform->invoke('text-embedding-3-small', 'What is the capital of France?'); // Generate a text completion with GPT, returns a Symfony\AI\Platform\Result\TextResult - $result = $platform->invoke($model, new MessageBag(Message::ofUser('What is the capital of France?'))); + $result = $platform->invoke('gpt-4o-mini', new MessageBag(Message::ofUser('What is the capital of France?'))); Depending on the model and its capabilities, different types of inputs and outputs are supported, which results in a very flexible and powerful interface for working with AI models. @@ -56,12 +48,11 @@ very flexible and powerful interface for working with AI models. 
Models ------ -The component provides a model base class ``Symfony\AI\Platform\Model`` which is a combination of a model name, a set of +The component provides a model base class :class:`Symfony\\AI\\Platform\\Model` which is a combination of a model name, a set of capabilities, and additional options. Usually, bridges to specific providers extend this base class to provide a quick -start for vendor-specific models and their capabilities, see ``Symfony\AI\Platform\Bridge\Anthropic\Claude`` or -``Symfony\AI\Platform\Bridge\OpenAi\Gpt``. +start for vendor-specific models and their capabilities. -Capabilities are a list of strings defined by ``Symfony\AI\Platform\Capability``, which can be used to check if a model +Capabilities are a list of strings defined by :class:`Symfony\\AI\\Platform\\Capability`, which can be used to check if a model supports a specific feature, like ``Capability::INPUT_AUDIO`` or ``Capability::OUTPUT_IMAGE``. Options are additional parameters that can be passed to the model, like ``temperature`` or ``max_tokens``, and are @@ -112,15 +103,14 @@ Supported Models & Platforms * All models provided by `HuggingFace`_ can be listed with a command in the examples folder, and also filtered, e.g. ``php examples/huggingface/_model-listing.php --provider=hf-inference --task=object-detection`` -See `GitHub`_ for planned support of other models and platforms. 
- Options ------- -The third parameter of the ``invoke`` method is an array of options, which basically wraps the options of the -corresponding model and platform, like ``temperature`` or ``stream``:: +The third parameter of the :method:`Symfony\\AI\\Platform\\PlatformInterface::invoke` +method is an array of options, which basically wraps the options of the corresponding +model and platform, like ``temperature`` or ``max_tokens``:: - $result = $platform->invoke($model, $input, [ + $result = $platform->invoke('gpt-4o-mini', $input, [ 'temperature' => 0.7, 'max_tokens' => 100, ]); @@ -132,11 +122,12 @@ corresponding model and platform, like ``temperature`` or ``stream``:: Language Models and Messages ---------------------------- -One central feature of the Platform component is the support for language models and easing the interaction with them. -This is supported by providing an extensive set of data classes around the concept of messages and their content. +One central feature of the Platform component is the support for language +models and easing the interaction with them. This is supported by providing +an extensive set of data classes around the concept of messages and their content. 
-Messages can be of different types, most importantly ``UserMessage``, ``SystemMessage``, or ``AssistantMessage``, can -have different content types, like ``Text``, ``Image`` or ``Audio``, and can be grouped into a ``MessageBag``:: +Messages can be of different types, most importantly :class:`Symfony\\AI\\Platform\\Message\\UserMessage`, :class:`Symfony\\AI\\Platform\\Message\\SystemMessage`, or :class:`Symfony\\AI\\Platform\\Message\\AssistantMessage`, can +have different content types, like :class:`Symfony\\AI\\Platform\\Message\\Content\\Text`, :class:`Symfony\\AI\\Platform\\Message\\Content\\Image` or :class:`Symfony\\AI\\Platform\\Message\\Content\\Audio`, and can be grouped into a :class:`Symfony\\AI\\Platform\\Message\\MessageBag`:: use Symfony\AI\Platform\Message\Content\Image; use Symfony\AI\Platform\Message\Message; @@ -179,7 +170,7 @@ Result Streaming ---------------- Since LLMs usually generate a result word by word, most of them also support streaming the result using Server Side -Events. Symfony AI supports that by abstracting the conversion and returning a ``Generator`` as content of the result:: +Events. Symfony AI supports that by abstracting the conversion and returning a :class:`Generator` as content of the result:: use Symfony\AI\Agent\Agent; use Symfony\AI\Message\Message; @@ -200,8 +191,10 @@ Events. Symfony AI supports that by abstracting the conversion and returning a ` echo $word; } -In a terminal application this generator can be used directly, but with a web app an additional layer like `Mercure`_ -needs to be used. +.. note:: + + To be able to use streaming in your web application, + an additional layer like `Mercure`_ is needed. 
Code Examples ~~~~~~~~~~~~~ @@ -213,7 +206,7 @@ Code Examples Image Processing ---------------- -Some LLMs also support images as input, which Symfony AI supports as content type within the ``UserMessage``:: +Some LLMs also support images as input, which Symfony AI supports as content type within the :class:`Symfony\\AI\\Platform\\Message\\UserMessage`:: use Symfony\AI\Platform\Message\Content\Image; use Symfony\AI\Platform\Message\Message; @@ -242,7 +235,7 @@ Audio Processing ---------------- Similar to images, some LLMs also support audio as input, which is just another content type within the -``UserMessage``:: +:class:`Symfony\\AI\\Platform\\Message\\UserMessage`:: use Symfony\AI\Platform\Message\Content\Audio; use Symfony\AI\Platform\Message\Message; @@ -268,15 +261,13 @@ Embeddings Creating embeddings of word, sentences, or paragraphs is a typical use case around the interaction with LLMs. -The standalone usage results in an ``Vector`` instance:: +The standalone usage results in a :class:`Symfony\\AI\\Platform\\Vector\\Vector` instance:: use Symfony\AI\Platform\Bridge\OpenAi\Embeddings; - // Initialize Platform - - $embeddings = new Embeddings($platform, Embeddings::TEXT_3_SMALL); + // Initialize platform - $vectors = $platform->invoke($embeddings, $textInput)->asVectors(); + $vectors = $platform->invoke('text-embedding-3-small', $textInput)->asVectors(); dump($vectors[0]->getData()); // returns something like: [0.123, -0.456, 0.789, ...] @@ -292,7 +283,8 @@ Server Tools Some platforms provide built-in server-side tools for enhanced capabilities without custom implementations: -1. **[Gemini](gemini-server-tools.rst)** - URL Context, Google Search, Code Execution +1. **[Gemini](platform/gemini-server-tools.rst)** - URL Context, Google Search, Code Execution +2. **[VertexAI](platform/vertexai-server-tools.rst)** - URL Context, Google Search, Code Execution For complete Vertex AI setup and usage guide, see :doc:`vertexai`. 
@@ -302,10 +294,10 @@ Parallel Platform Calls Since the ``Platform`` sits on top of Symfony's HttpClient component, it supports multiple model calls in parallel, which can be useful to speed up the processing:: - // Initialize Platform & Model + // Initialize Platform foreach ($inputs as $input) { - $results[] = $platform->invoke($model, $input); + $results[] = $platform->invoke('gpt-4o-mini', $input); } foreach ($results as $result) { @@ -315,19 +307,20 @@ which can be useful to speed up the processing:: Testing Tools ------------- -For unit or integration testing, you can use the `InMemoryPlatform`, which implements `PlatformInterface` without calling external APIs. +For unit or integration testing, you can use the :class:`Symfony\\AI\\Platform\\InMemoryPlatform`, +which implements :class:`Symfony\\AI\\Platform\\PlatformInterface` without calling external APIs. It supports returning either: - A fixed string result -- A callable that dynamically returns a simple string or any ``ResultInterface`` based on the model, input, and options:: +- A callable that dynamically returns a simple string or any :class:`Symfony\\AI\\Platform\\Result\\ResultInterface` based on the model, input, and options:: use Symfony\AI\Platform\InMemoryPlatform; use Symfony\AI\Platform\Model; $platform = new InMemoryPlatform('Fake result'); - $result = $platform->invoke(new Model('test'), 'What is the capital of France?'); + $result = $platform->invoke('gpt-4o-mini', 'What is the capital of France?'); echo $result->asText(); // "Fake result" @@ -340,7 +333,7 @@ Dynamic Text Results fn($model, $input, $options) => "Echo: {$input}" ); - $result = $platform->invoke(new Model('test'), 'Hello AI'); + $result = $platform->invoke('gpt-4o-mini', 'Hello AI'); echo $result->asText(); // "Echo: Hello AI" Vector Results @@ -354,7 +347,7 @@ Vector Results fn() => new VectorResult(new Vector([0.1, 0.2, 0.3, 0.4])) ); - $result = $platform->invoke(new Model('test'), 'vectorize this text'); + $result = 
$platform->invoke('gpt-4o-mini', 'vectorize this text'); $vectors = $result->asVectors(); // Returns Vector object with [0.1, 0.2, 0.3, 0.4] Binary Results @@ -368,7 +361,7 @@ Binary Results fn() => new BinaryResult('fake-pdf-content', 'application/pdf') ); - $result = $platform->invoke(new Model('test'), 'generate PDF document'); + $result = $platform->invoke('gpt-4o-mini', 'generate PDF document'); $binary = $result->asBinary(); // Returns Binary object with content and MIME type Raw Results @@ -423,7 +416,6 @@ Code Examples .. _`OpenAI's DallĀ·E`: https://platform.openai.com/docs/guides/image-generation .. _`OpenAI's Whisper`: https://platform.openai.com/docs/guides/speech-to-text .. _`HuggingFace`: https://huggingface.co/ -.. _`GitHub`: https://github.com/symfony/ai/issues/16 .. _`Mercure`: https://mercure.rocks/ .. _`Streaming Claude`: https://github.com/symfony/ai/blob/main/examples/anthropic/stream.php .. _`Streaming GPT`: https://github.com/symfony/ai/blob/main/examples/openai/stream.php diff --git a/docs/components/platform/gemini-server-tools.rst b/docs/components/platform/gemini-server-tools.rst index 0dcc68a6a..7a8e021d3 100644 --- a/docs/components/platform/gemini-server-tools.rst +++ b/docs/components/platform/gemini-server-tools.rst @@ -26,63 +26,60 @@ The URL Context tool allows Gemini to fetch and analyze content from web pages. 
 ::

-    $model = new Gemini('gemini-2.5-pro-preview-03-25', [
-        'server_tools' => [
-            'url_context' => true
-        ]
-    ]);
-
     $messages = new MessageBag(
         Message::ofUser('What was the 12 month Euribor rate a week ago based on https://www.euribor-rates.eu/en/current-euribor-rates/4/euribor-rate-12-months/')
     );

-    $result = $platform->invoke($model, $messages);
+    $result = $platform->invoke('gemini-2.5-pro-preview-03-25', $messages, [
+        'server_tools' => [
+            'url_context' => true,
+        ]
+    ]);

 Google Search
 ~~~~~~~~~~~~~

 The Google Search tool enables the model to search the web and incorporate search results into its results::

-    $model = new Gemini('gemini-2.5-pro-preview-03-25', [
-        'server_tools' => [
-            'google_search' => true
-        ]
-    ]);
-
     $messages = new MessageBag(
         Message::ofUser('What are the latest developments in quantum computing?')
     );

-    $result = $platform->invoke($model, $messages);
+    $result = $platform->invoke('gemini-2.5-pro-preview-03-25', $messages, [
+        'server_tools' => [
+            'google_search' => true,
+        ]
+    ]);

 Code Execution
 ~~~~~~~~~~~~~~

 The Code Execution tool provides a sandboxed environment for running code::

-    $model = new Gemini('gemini-2.5-pro-preview-03-25', [
-        'server_tools' => [
-            'code_execution' => true
-        ]
-    ]);
-
     $messages = new MessageBag(
         Message::ofUser('Calculate the factorial of 20 and show me the code')
     );

-    $result = $platform->invoke($model, $messages);
-
+    $result = $platform->invoke('gemini-2.5-pro-preview-03-25', $messages, [
+        'server_tools' => [
+            'code_execution' => true,
+        ]
+    ]);

 Using Multiple Server Tools
 ---------------------------

 You can enable multiple server tools simultaneously::

-    $model = new Gemini('gemini-2.5-pro-preview-03-25', [
+    $messages = new MessageBag(
+        Message::ofUser('Calculate the factorial of 20 and show me the code')
+    );
+
+    $result = $platform->invoke('gemini-2.5-pro-preview-03-25', $messages, [
         'server_tools' => [
             'url_context' => true,
             'google_search' => true,
-            'code_execution' => true
+            'code_execution' => true,
         ]
     ]);
diff --git a/docs/components/platform/vertexai-server-tools.rst b/docs/components/platform/vertexai-server-tools.rst
index 9ee6c9b2b..d47fe9cb1 100644
--- a/docs/components/platform/vertexai-server-tools.rst
+++ b/docs/components/platform/vertexai-server-tools.rst
@@ -29,13 +29,13 @@ The URL Context tool allows the model to fetch and analyze content from specifie

 ::

-    $model = new VertexAi\Gemini\Model('gemini-2.5-pro', ['server_tools' => ['url_context' => true]]);
-
     $messages = new MessageBag(
         Message::ofUser("Based on https://www.euribor-rates.eu/en/current-euribor-rates/4/euribor-rate-12-months/, what is the latest 12-month Euribor rate?"),
     );

-    $result = $platform->invoke($model, $messages);
+    $result = $platform->invoke('gemini-2.5-pro', $messages, [
+        'server_tools' => ['url_context' => true],
+    ]);

 Grounding with Google Search
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -47,17 +47,15 @@ Ground a model's responses using Google Search, which uses publicly-available we

 ::

-    $model = new VertexAi\Gemini\Model('gemini-2.5-pro', [
-        'server_tools' => [
-            'google_search' => true,
-        ],
-    ]);
-
     $messages = new MessageBag(
         Message::ofUser('What are the top breakthroughs in AI in 2025 so far?')
     );

-    $result = $platform->invoke($model, $messages);
+    $result = $platform->invoke('gemini-2.5-pro', $messages, [
+        'server_tools' => [
+            'google_search' => true,
+        ],
+    ]);

 Code Execution
 ~~~~~~~~~~~~~~
@@ -67,17 +65,15 @@ More info can be found at https://cloud.google.com/vertex-ai/generative-ai/docs/

 ::

-    $model = new Gemini('gemini-2.5-pro-preview-03-25', [
-        'server_tools' => [
-            'code_execution' => true,
-        ],
-    ]);
-
     $messages = new MessageBag(
         Message::ofUser('Write Python code to calculate the 50th Fibonacci number and run it')
     );

-    $result = $platform->invoke($model, $messages);
+    $result = $platform->invoke('gemini-2.5-pro-preview-03-25', $messages, [
+        'server_tools' => [
+            'code_execution' => true,
+        ],
+    ]);

 Using Multiple Server Tools
@@ -85,7 +81,11 @@ Using Multiple Server Tools
 You can enable multiple tools in a single request::

-    $model = new Gemini('gemini-2.5-pro-preview-03-25', [
+    $messages = new MessageBag(
+        Message::ofUser('Write Python code to calculate the 50th Fibonacci number and run it')
+    );
+
+    $result = $platform->invoke('gemini-2.5-pro-preview-03-25', $messages, [
         'server_tools' => [
             'google_search' => true,
             'code_execution' => true,
diff --git a/docs/components/platform/vertexai.rst b/docs/components/platform/vertexai.rst
index e4882b0e9..86bacf93a 100644
--- a/docs/components/platform/vertexai.rst
+++ b/docs/components/platform/vertexai.rst
@@ -62,32 +62,19 @@ Basic usage example::
         $httpClient
     );

-    $model = new Model(Model::GEMINI_2_5_FLASH);
-
     $messages = new MessageBag(
         Message::ofUser('Hello, how are you?')
     );

-    $result = $platform->invoke($model, $messages);
+    $result = $platform->invoke('gemini-2.5-flash', $messages);

     echo $result->getContent();

-Available Models
-----------------
-
-The VertexAI bridge supports various Gemini models:
-
-* ``Model::GEMINI_2_5_PRO`` - Most capable model for complex tasks
-* ``Model::GEMINI_2_5_FLASH`` - Fast and efficient for most use cases
-* ``Model::GEMINI_2_0_FLASH`` - Previous generation fast model
-* ``Model::GEMINI_2_5_FLASH_LITE`` - Lightweight version
-* ``Model::GEMINI_2_0_FLASH_LITE`` - Previous generation lightweight model
-
 Model Availability by Location
 ------------------------------

-.. important::
+.. note::

-    **Model availability varies by Google Cloud location.** Not all models are available in all regions.
+    Model availability varies by Google Cloud location. Not all models are available in all regions.

 Common model availability:
diff --git a/docs/components/store.rst b/docs/components/store.rst
index f43f27402..09b21594c 100644
--- a/docs/components/store.rst
+++ b/docs/components/store.rst
@@ -6,11 +6,9 @@ The Store component provides a low-level abstraction for storing and retrieving
 Installation
 ------------

-Install the component using Composer:
-
 .. code-block:: terminal

-    composer require symfony/ai-store
+    $ composer require symfony/ai-store

 Purpose
 -------
@@ -24,8 +22,8 @@ for documents.
 Indexing
 --------

-One higher level feature is the ``Symfony\AI\Store\Indexer``. The purpose of this service is to populate a store with documents.
-Therefore it accepts one or multiple ``Symfony\AI\Store\Document\TextDocument`` objects, converts them into embeddings and stores them in the
+One higher level feature is the :class:`Symfony\\AI\\Store\\Indexer`. The purpose of this service is to populate a store with documents.
+Therefore it accepts one or multiple :class:`Symfony\\AI\\Store\\Document\\TextDocument` objects, converts them into embeddings and stores them in the
 used vector store::

     use Symfony\AI\Store\Document\TextDocument;
@@ -73,14 +71,10 @@ Supported Stores
 * `Postgres`_ (requires `ext-pdo`)
 * `Qdrant`_
 * `SurrealDB`_
-* `Symfony Cache`_
+* `Symfony Cache`_ (requires `symfony/cache` as additional dependency)
 * `Typesense`_
 * `Weaviate`_

-.. note::
-
-    See `GitHub`_ for planned stores.
-
 Commands
 --------

@@ -107,12 +101,14 @@ you can use the ``bin/console ai:store:setup`` command to initialize the store a
 Implementing a Bridge
 ---------------------

-The main extension points of the Store component is the ``Symfony\AI\Store\StoreInterface``, that defines the methods
+The main extension point of the Store component is the :class:`Symfony\\AI\\Store\\StoreInterface`, which defines the methods
 for adding vectorized documents to the store, and querying the store for documents with a vector.

 This leads to a store implementing two methods::

     use Symfony\AI\Store\StoreInterface;
+    use Symfony\AI\Store\Vector;
+    use Symfony\AI\Store\VectorDocument;

     class MyStore implements StoreInterface
     {
@@ -156,6 +152,5 @@ This leads to a store implementing two methods::
 .. _`Qdrant`: https://qdrant.tech/
 .. _`Neo4j`: https://neo4j.com/
 .. _`Typesense`: https://typesense.org/
-.. _`GitHub`: https://github.com/symfony/ai/issues/16
 .. _`Symfony Cache`: https://symfony.com/doc/current/components/cache.html
 .. _`Weaviate`: https://weaviate.io/