[Agent Builder] Add page about models #3338
These pages are currently hidden from the docs TOC and have `noindexed` meta headers.

**Go to the docs [landing page](/solutions/search/elastic-agent-builder.md).**
:::

# Using different models in {{agent-builder}}

{{agent-builder}} uses large language models (LLMs) to power agent reasoning and decision-making. By default, agents use the Elastic Managed LLM, but you can configure other models through Kibana connectors.

## Default model configuration

By default, {{agent-builder}} uses the Elastic Managed LLM connector running on the [Elastic Inference Service](/explore-analyze/elastic-inference/eis.md) {applies_to}`serverless: preview` {applies_to}`ess: preview 9.2`.

This managed service requires no setup and no additional API key management.

Learn more about the [Elastic Managed LLM connector](kibana://reference/connectors-kibana/elastic-managed-llm.md) and [pricing](https://www.elastic.co/pricing).

## Change the default model

To use a different model, select a configured connector and set it as the default.

### Use a pre-configured connector

1. Search for **GenAI Settings** in the global search field.
2. Select your preferred connector from the **Default AI Connector** dropdown.
3. Save your changes.

### Create a new connector in the UI

1. Find connectors under **Alerts and Insights / Connectors** in the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Select **Create Connector**, then choose your model provider.
3. Configure the connector with your API credentials and preferred model.
4. Search for **GenAI Settings** in the global search field.
5. Select your new connector from the **Default AI Connector** dropdown under **Custom connectors**.
6. Save your changes.

For detailed instructions on creating connectors, refer to [Connectors](https://www.elastic.co/docs/deploy-manage/manage-connectors).

Learn more about [preconfigured connectors](https://www.elastic.co/docs/reference/kibana/connectors-kibana/pre-configured-connectors).

#### Connect a local LLM

You can connect a locally hosted LLM to Elastic using the OpenAI connector, provided your local LLM exposes an OpenAI-compatible API.

Refer to the [OpenAI connector documentation](kibana://reference/connectors-kibana/openai-action-type.md) for detailed setup instructions.

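As an illustration, a locally served model can also be registered programmatically through the Kibana Connectors API. The following is a sketch only: `KIBANA_URL`, the credentials, the endpoint URL, the model name, and the `local-llm` connector name are placeholders, and the exact `config` fields (including the `apiProvider` value to use for OpenAI-compatible services) should be verified against the OpenAI connector reference for your Kibana version.

```shell
# Sketch: register a local OpenAI-compatible endpoint (for example, an
# Ollama server) as an OpenAI connector via the Kibana Connectors API.
# All URLs, names, and credentials below are placeholders.
curl -X POST "${KIBANA_URL}/api/actions/connector" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u "${KIBANA_USER}:${KIBANA_PASSWORD}" \
  -d '{
    "name": "local-llm",
    "connector_type_id": ".gen-ai",
    "config": {
      "apiProvider": "Other",
      "apiUrl": "http://localhost:11434/v1/chat/completions",
      "defaultModel": "llama3.1"
    },
    "secrets": {
      "apiKey": "unused-for-local-models"
    }
  }'
```

After the connector is created, it appears under **Custom connectors** in the **Default AI Connector** dropdown described above.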
## Connectors API

For programmatic access to connector management, refer to the [Connectors API documentation]({{kib-serverless-apis}}group/endpoint-connectors).

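For example, you can list the configured connectors to find one to set as the default. This is a sketch against the Connectors API; `KIBANA_URL` and the credentials are placeholders.

```shell
# Sketch: list configured connectors; each entry includes the connector id,
# name, and connector_type_id (for example, ".gen-ai" for OpenAI connectors).
curl -X GET "${KIBANA_URL}/api/actions/connectors" \
  -H "kbn-xsrf: true" \
  -u "${KIBANA_USER}:${KIBANA_PASSWORD}"
```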
## Recommended models

{{agent-builder}} requires models with strong reasoning and tool-calling capabilities. State-of-the-art models perform significantly better than smaller or older models.

The following models are known to work well with {{agent-builder}}:

- **OpenAI**: GPT-4.1, GPT-4o
- **Anthropic**: Claude Sonnet 4.5, Claude Sonnet 4, Claude Sonnet 3.7
- **Google**: Gemini 2.5 Pro

### Why model quality matters

{{agent-builder}} relies on advanced LLM capabilities, including:

- **Function calling**: Models must accurately select appropriate tools and construct valid parameters from natural language requests
- **Multi-step reasoning**: Agents need to plan, execute, and adapt based on tool results across multiple iterations
- **Structured output**: Models must produce properly formatted responses that the agent framework can parse

Smaller or less capable models may produce errors like:

```console-response
Error: Invalid function call syntax
```

```console-response
Error executing agent: No tool calls found in the response.
```

While any chat-completion-compatible connector can technically be configured, we strongly recommend using state-of-the-art models for reliable agent performance.

:::{note}
GPT-4o-mini and similar smaller models are not recommended for {{agent-builder}} as they lack the necessary capabilities for reliable agent workflows.
:::

## Related resources

- [Limitations and known issues](limitations-known-issues.md): Current limitations around model selection
- [Get started](get-started.md): Initial setup and configuration
- [Connectors](/deploy-manage/manage-connectors.md): Detailed connector configuration guide