12 changes: 12 additions & 0 deletions docs/ai-agents/build-an-ai-agent.md
@@ -26,6 +26,10 @@ Click on the "New AI Agent" button and fill the form with the agent details.

We recommend following the steps below.

:::info MCP Server Backend Mode
AI agents can be enhanced with MCP server backend mode, which expands their capabilities with intelligent catalog access and processing by Claude models. This mode is selected when [interacting with the agents](/ai-agents/interact-with-ai-agents) through widgets and API calls, not in the agent configuration itself.
:::

### Step 1: Define your agent's purpose

The first step in building an AI agent is deciding on its purpose.
@@ -51,6 +55,10 @@ For example:

Pay attention to relationships between entities to ensure your agent can provide comprehensive answers.

:::tip Enhanced access with MCP server backend
When using [MCP server backend mode](/ai-agents/interact-with-ai-agents) during interactions, the agent can intelligently access your entire catalog regardless of configured blueprints, providing more comprehensive answers.
:::

### Step 3: Configure actions (optional)

If your agent needs to run actions, you will need to:
@@ -234,6 +242,10 @@ AI agents in Port can search, group, and index entities in your Port instance. H
- **Permission model**:
- Interaction with the AI agent is based on your user permissions.
- Sequential automations run as Admin.

:::info Enhanced capabilities with MCP server backend
When using [MCP server backend mode](/ai-agents/interact-with-ai-agents) during interactions, many of these limitations are eased, as the agent gains access to enhanced tools and broader data access.
:::
</details>

<details>
68 changes: 66 additions & 2 deletions docs/ai-agents/interact-with-ai-agents.md
@@ -15,6 +15,17 @@ import TabItem from "@theme/TabItem"

Once you've built your AI agents, it's time to interact with them. Port provides several ways to communicate with your AI agents.

## Backend mode selection

When interacting with AI agents, you can choose between two backend modes that determine the agent's capabilities:

- **Standard backend**: Uses the agent's configured blueprint access and OpenAI GPT models
- **MCP server backend**: Provides enhanced capabilities with intelligent catalog access and Claude models

:::tip Backend mode is interaction-level
The backend mode is controlled when you interact with agents (through widgets, API calls, etc.), not in the agent configuration itself. This means any agent can benefit from MCP server backend capabilities when you choose to use them.
:::

## Interaction options

You have two main approaches when interacting with AI agents in Port:
@@ -50,6 +61,18 @@ Follow these steps to add an AI agent widget:

The widget provides a chat interface where you can ask questions and receive responses from the **specific agent you configured** without leaving your dashboard.

### MCP server backend mode in widgets

When adding an AI agent widget to your dashboard, you can choose whether to use the MCP server backend mode via the "Use MCP" toggle in the widget configuration:

<img src='/img/ai-agents/AIAgentsMCPWidgetConfig.png' width='70%' border='1px' />

When MCP server backend mode is enabled, the widget interface provides enhanced capabilities and visual indicators showing which tools are being used:

<img src='/img/ai-agents/AIAgentsMCPWidgetUI.png' width='80%' border='1px' />

This gives you transparency into the enhanced processing and shows exactly which MCP server tools the agent uses to answer your questions.

</TabItem>
<TabItem value="slack-integration" label="Slack Integration">

@@ -75,6 +98,10 @@ When you send a message, the app will:
- Limit threads to five consecutive messages for optimal performance.
- For best results, start new threads for new topics or questions.

:::info MCP server backend mode in Slack
Currently, Slack interactions use the agent's default backend mode configuration. The ability to choose or override the backend mode per interaction is not yet available in Slack, but will be added in future updates.
:::

</TabItem>
<TabItem value="actions-automations" label="Actions and automations">

Expand Down Expand Up @@ -116,6 +143,24 @@ curl 'https://api.port.io/v1/agent/<AGENT_IDENTIFIER>/invoke?stream=true' \\
--data-raw '{"prompt":"What is my next task?"}'
```

**Using MCP Server Backend Mode via API:**

You can override the agent's default backend mode by adding the `use_mcp` query parameter:

```bash
# Force MCP server backend mode
curl 'https://api.port.io/v1/agent/<AGENT_IDENTIFIER>/invoke?stream=true&use_mcp=true' \\
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \\
-H 'Content-Type: application/json' \\
--data-raw '{"prompt":"What is my next task?"}'

# Force standard backend mode
curl 'https://api.port.io/v1/agent/<AGENT_IDENTIFIER>/invoke?stream=true&use_mcp=false' \\
-H 'Authorization: Bearer <YOUR_API_TOKEN>' \\
-H 'Content-Type: application/json' \\
--data-raw '{"prompt":"What is my next task?"}'
```

**Streaming Response Details (Server-Sent Events):**

The API will respond with `Content-Type: text/event-stream; charset=utf-8`.
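
If you want to see the events printed as they arrive rather than waiting for buffered output, curl's `-N` (`--no-buffer`) flag can help. This is a minimal sketch that reuses the placeholder endpoint, token, and prompt from the examples above:

```bash
# Print Server-Sent Events as soon as they arrive; -N disables curl's output buffering.
curl -N 'https://api.port.io/v1/agent/<AGENT_IDENTIFIER>/invoke?stream=true' \\
  -H 'Authorization: Bearer <YOUR_API_TOKEN>' \\
  -H 'Content-Type: application/json' \\
  --data-raw '{"prompt":"What is my next task?"}'
```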
@@ -259,7 +304,18 @@ The plan shows how the agent decided to tackle your request and the steps it int

### Tools used

This section displays the actual steps the agent took and the APIs it used to complete your request. It can be particularly helpful for debugging when answers don't meet expectations, such as when an agent:
This section displays the actual steps the agent took and the APIs it used to complete your request. The tools shown depend on the backend mode used:

**Standard Backend Mode:**
- Shows traditional agent tools based on configured blueprint access
- Limited to predefined search and query capabilities

**MCP Server Backend Mode:**
- Shows enhanced MCP server tools for comprehensive data access
- Includes all read-only tools available in the MCP server
- Provides more detailed tool execution information

This information can be particularly helpful for debugging when answers don't meet expectations, such as when an agent:

- Used an incorrect field name.
- Chose an inappropriate property.
@@ -308,7 +364,15 @@ Here are some common errors you might encounter when working with AI agents and
This error occurs when an AI agent tries to execute a self-service action that requires selecting entities from specific blueprints, but the agent doesn't have access to those blueprints.

**How to fix:**
Add the missing blueprints listed in the error message to the agent's configuration.

For **Standard Backend Mode:**
- Add the missing blueprints listed in the error message to the agent's configuration.
- Alternatively, consider switching to MCP server backend mode for enhanced blueprint access.

For **MCP Server Backend Mode:**
- This error is less common, since MCP mode has broader data access.
- If it does occur, it likely relates to the action's execution requirements.
- Ensure the action's entity selection fields are properly configured.
</details>

## Security considerations
23 changes: 22 additions & 1 deletion docs/ai-agents/overview.md
@@ -30,6 +30,21 @@ AI agents serve two primary functions:

2. **Assist with actions** by helping developers complete common tasks faster. Agents can suggest and pre-fill forms, guide developers through workflows, and provide relevant context for decision-making. You can decide whether they can run an action or require human approval.

## Enhanced capabilities with MCP server backend

:::tip New capability
Port AI agents now support an enhanced **MCP server backend mode** that significantly expands their capabilities. You can enable it for any existing agent to unlock these advanced capabilities.
:::

When using the MCP server backend mode, your AI agents gain:

- **Expanded data access**: Intelligently queries your entire catalog without blueprint restrictions
- **Enhanced reasoning**: Powered by Claude models for improved analysis and decision-making
- **Broader tool access**: Uses all read-only tools available in the MCP server for comprehensive insights
- **Smarter action selection**: Still respects your configured allowed actions while providing better context

Your existing agents can immediately benefit from these enhancements by enabling the MCP server backend mode when [interacting with them](/ai-agents/interact-with-ai-agents) through widgets and API calls.
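
As a minimal sketch of what enabling the mode per interaction looks like, the invoke API described in [interacting with AI agents](/ai-agents/interact-with-ai-agents) accepts a `use_mcp` query parameter; the agent identifier, token, and prompt below are placeholders:

```bash
# Invoke an agent with the MCP server backend for this request only.
curl 'https://api.port.io/v1/agent/<AGENT_IDENTIFIER>/invoke?stream=true&use_mcp=true' \\
  -H 'Authorization: Bearer <YOUR_API_TOKEN>' \\
  -H 'Content-Type: application/json' \\
  --data-raw '{"prompt":"What is my next task?"}'
```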

### Example use cases

**Questions your agents can answer:**
@@ -57,6 +72,7 @@ To start working with AI agents, follow these steps:
- Determine what actions your agents can assist with.
- Set permissions for who can use specific agents.
- Configure how agents integrate with your workflows.
- Choose between standard and MCP server backend modes when [interacting with agents](/ai-agents/interact-with-ai-agents).

## Security and data handling

@@ -151,7 +167,12 @@ We limit this data storage strictly to these purposes. You can contact us to opt
<details>
<summary>Which LLM models are you using? (Click to expand)</summary>

We aim to use the best models that will yield the best results while keeping your data safe; at the moment, we work with Open AI's GPT models, but this could change in the future.
We use different models depending on the backend mode:

- **Standard backend**: OpenAI's GPT models for reliable performance and broad compatibility
- **MCP server backend**: Claude models for enhanced reasoning and analysis capabilities

We aim to use the models that yield the best results while keeping your data safe. Model selection may evolve as we continue to optimize agent performance.
</details>

<details>
Expand Down