diff --git a/docs/ai-agents/port-mcp-server.md b/docs/ai-agents/port-mcp-server.md
deleted file mode 100644
index ba90551537..0000000000
--- a/docs/ai-agents/port-mcp-server.md
+++ /dev/null
@@ -1,626 +0,0 @@
----
-sidebar_position: 4
-title: Port MCP Server
----
-
-import Tabs from "@theme/Tabs"
-import TabItem from "@theme/TabItem"
-
-# Port MCP server
-
-
-
-
-
-
-
-
-The Port Model Context Protocol (MCP) Server acts as a bridge, enabling Large Language Models (LLMs)—like those powering Claude, Cursor, or GitHub Copilot—to interact directly with your Port.io developer portal. This allows you to leverage natural language to query your software catalog, analyze service health, manage resources, and even streamline development workflows, all from your preferred interfaces.
-
-:::info AI Agents vs. MCP Server
-The Port MCP Server is currently in open beta and provides significant standalone value, independent of our [AI Agents feature](/ai-agents/overview). Port AI Agents are currently in closed beta with limited access, while the MCP Server gives you immediate access to build in Port, query your catalog, analyze service health, and streamline development workflows using natural language.
-
-While the MCP Server can interact with Port AI Agents when available, the core MCP functionality can be used freely without requiring access to the closed beta AI Agents feature.
-:::
-
-## Why integrate LLMs with your developer portal?
-
-The primary advantage of the Port MCP Server is the ability to bring your developer portal's data and actions into the conversational interfaces you already use. This offers several benefits:
-
-* **Reduced Context Switching:** Access Port information and initiate actions without leaving your IDE or chat tool.
-* **Increased Efficiency:** Get answers and perform tasks faster using natural language commands.
-* **Improved Developer Experience:** Make your developer portal more accessible and intuitive to interact with.
-* **Enhanced Data-Driven Decisions:** Easily pull specific data points from Port to inform your work in real-time.
-
-As one user put it:
-
-> "It would be interesting to build a use case where a developer could ask Copilot from his IDE about stuff Port knows about, without actually having to go to Port."
-
-The Port MCP Server directly enables these kinds of valuable, in-context interactions.
-
-## Key capabilities and use-cases
-
-
-
-
-
-
-
-The Port MCP Server enables you to interact with your Port data and capabilities directly through natural language within your chosen LLM-powered tools. Here's what you can achieve:
-
-### Find information quickly
-
-Effortlessly query your software catalog and get immediate answers. This eliminates the need to navigate through UIs or write complex API queries when you need information.
-
-* Ask: "Who is the owner of service X?"
-* Ask: "How many services do we have in production?"
-* Ask: "Show me all the microservices owned by the Backend team."
-* Ask: "What are the dependencies of the 'OrderProcessing' service?"
-
-
-
-### Vibe-build in Port
-
-Leverage Claude's capabilities to manage and build your entire Port software catalog. You can create and configure blueprints, set up self-service actions, design scorecards, and more.
-
-* Ask: "Please help me apply this guide into my Port instance - [[guide URL]]"
-* Ask: "I want to start managing my k8s deployments, how can we build it in Port?"
-* Ask: "I want a new production readiness scorecard to track the code quality and service alerts"
-* Ask: "Create a new self-service action in Port to scaffold a new service"
-
-
-
-### Analyze scorecards and quality
-
-Gain insights into service health, compliance, and quality by leveraging Port's scorecard data. Identify areas for improvement and track progress against your standards.
-
-* Ask: "Which services are failing our security requirements scorecard?"
-* Ask: "What's preventing the 'InventoryService' from reaching Gold level in the 'Production Readiness' scorecard?"
-* Ask: "Show me the bug count vs. test coverage for all Java microservices."
-
-
-
-* Ask: "Which of our services are missing critical monitoring dashboards?"
-
-
-
-### Streamline development and operations
-
-Receive assistance with common development and operational tasks, directly within your workflow.
-
-* Ask: "What do I need to do to set up a new 'ReportingService'?"
-* Ask: "Guide me through creating a new component blueprint with 'name', 'description', and 'owner' properties."
-* Ask: "Help me add a rule to the 'Tier1Services' scorecard that requires an on-call schedule to be defined."
-
-
-
-### Find your own use cases
-
-You can use Port's MCP to find the use cases that will be valuable to you. Try using this prompt: "think of creative prompts I can use to showcase the power of Port's MCP, based on the data available in Port"
-
-
-## Using Port MCP
-
-### Setup
-
-Setting up Port's MCP is simple. Follow the instructions for your preferred tool, or learn about the archived local MCP server.
-
-
-
-To connect Cursor to Port's remote MCP, follow these steps:
-
-1. **Go to Cursor settings, click on Tools & Integrations, and add a new MCP server**
-
-
-
-2. **Add the above configuration**
-
-Use the appropriate configuration for your region:
-
-
-
-```json showLineNumbers
-{
- "mcpServers": {
- "port-eu": {
- "url": "https://mcp.port.io/v1"
- }
- }
-}
-```
-
-
-```json showLineNumbers
-{
- "mcpServers": {
- "port-us": {
- "url": "https://mcp.us.port.io/v1"
- }
- }
-}
-```
-
-
-
-
-
-3. **Login to Port**
-Click on "Needs login", and complete the authentication flow in the window that opens up.
-
-
-4. **See the MCP tools**
-After successfully connecting to Port, you'll see the list of available tools from the MCP.
-
-
-:::warning Authentication window behavior
-In some cases, after clicking "Accept" in the authentication popup, the window may not close even though the connection has been established successfully. You can safely close the window.
-
-If you still don't see the tools, try connecting again a couple of times. We are aware of this behavior and are working to improve it.
-:::
-
-
-
-To connect VSCode to Port's remote MCP server, follow these detailed steps. For complete instructions, refer to the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/chat/mcp-servers).
-
-:::info VSCode MCP requirements
-Before proceeding, ensure your VS Code is updated to the latest version and that MCP is enabled for your GitHub organization. You may need an organization admin to enable "Editor preview features" under Settings > Code, planning, and automation > Copilot.
-:::
-
-:::tip Prerequisites
-This configuration uses the open-source `mcp-remote` package, which requires Node.js to be installed on your system. Before using the configuration, ensure Node.js is available by running:
-
-```bash
-npx -y mcp-remote --help
-```
-
-If you encounter errors:
-- **Missing Node.js**: Install Node.js from [nodejs.org](https://nodejs.org/)
-- **Network issues**: Check your internet connection and proxy settings
-- **Permission issues**: You may need to run with appropriate permissions
-:::
-
-
-**Step 1: Configure MCP Server Settings**
-
-1. Open VS Code settings
-2. Search for "MCP: Open user configuration" (or follow the instructions on a workspace installation)
-3. Add the server configuration using the appropriate configuration for your region:
-
-
-
-```json showLineNumbers
-{
- "mcpServers": {
- "port-vscode-eu": {
- "command": "npx",
- "args": [
- "-y",
- "mcp-remote",
- "https://mcp.port.io/v1"
- ]
- }
- }
-}
-```
-
-
-```json showLineNumbers
-{
- "mcpServers": {
- "port-vscode-us": {
- "command": "npx",
- "args": [
- "-y",
- "mcp-remote",
- "https://mcp.us.port.io/v1"
- ]
- }
- }
-}
-```
-
-
-
-**Step 2: Start the MCP Server**
-
-1. After adding the configuration, click on "Start" to initialize the MCP server
-2. If you don't see the "Start" button, ensure:
- - Your VS Code version is updated to the latest version
- - MCP is enabled for your GitHub organization
- - "Editor preview features" is enabled under Settings > Code, planning, and automation > Copilot
-
-**Step 3: Verify Connection**
-
-1. Once started, you should see the number of available tools displayed
-2. If you don't see the tools count:
- - Click on "More" to expand additional options
- - Select "Show output" to view detailed logs
- - Check the output panel for any error messages or connection issues
-
-**Step 4: Access Port Tools**
-
-1. Start a new chat session in VS Code
-2. Click on the tools icon in the chat interface
-3. You should now see Port tools available for use
-
-
-
-
-
-To connect Claude to Port's remote MCP, you need to create a custom connector. This process does not require a client ID. For detailed instructions, refer to the [official Anthropic documentation on custom connectors](https://support.anthropic.com/en/articles/11175166-getting-started-with-custom-connectors-using-remote-mcp).
-
-When prompted for the remote MCP server URL, use the appropriate URL for your region:
-
-
-
-```
-https://mcp.port.io/v1
-```
-
-
-```
-https://mcp.us.port.io/v1
-```
-
-
-
-
-The local MCP server is an open-source project that you can run on your own infrastructure. It offers a similar set of capabilities, but requires manual setup and maintenance.
-
-While you can still use it, it is no longer supported and its issues and activity are not actively tracked.
-
-
-#### Prerequisites
-
-- A Port.io account with appropriate permissions.
-- Your Port credentials (Client ID and Client Secret). You can create these from your Port.io dashboard under Settings > Credentials.
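When the local server talks to Port's API, these credentials are exchanged for a short-lived access token. As a rough sketch of that exchange — assuming the standard Port token endpoint and the EU API URL — the token request can be built like this:

```python
import json
from urllib import request

PORT_API_URL = "https://api.getport.io/v1"  # adjust for your region

def token_request(client_id: str, client_secret: str) -> request.Request:
    # Build the POST request that exchanges Port credentials
    # for a short-lived access token.
    body = json.dumps({"clientId": client_id, "clientSecret": client_secret}).encode()
    return request.Request(
        f"{PORT_API_URL}/auth/access_token",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (requires real credentials and network access):
# req = token_request("<client-id>", "<client-secret>")
# token = json.load(request.urlopen(req))["accessToken"]
```

The returned token is then sent as a `Bearer` header on subsequent API calls.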
-
-
-#### Installation
-
-The Port MCP Server can be installed using Docker or `uvx` (a Python package runner that ships with `uv`). While the setup is straightforward, the specifics can vary based on your chosen MCP client (Claude, Cursor, VS Code, etc.).
-
-:::info Detailed Installation Guide
-For comprehensive, step-by-step installation instructions for various platforms and methods (Docker, UVX), please refer to the **[Port MCP Server GitHub README](https://github.com/port-labs/port-mcp-server)**.
-The README provides the latest configuration details and examples for different setups.
-:::
-
-
-
-### Available tools
-
-The Port MCP Server exposes different sets of tools based on your role and use case. The tools you see will depend on your permissions in Port.
-
-
-
-
-**Developers** are typically users who consume and interact with the developer portal - querying services, running actions, and analyzing data. These tools help you get information and execute approved workflows.
-
-**Query and analysis tools**
-- **[`get_blueprints`](/api-reference/get-all-blueprints)**: Retrieve a list of all blueprints from Port.
-- **[`get_blueprint`](/api-reference/get-a-blueprint)**: Retrieve information about a specific blueprint by its identifier.
-- **[`get_entities`](/api-reference/get-all-entities-of-a-blueprint)**: Retrieve all entities for a given blueprint.
-- **[`get_entity`](/api-reference/get-an-entity)**: Retrieve information about a specific entity.
-- **[`get_scorecards`](/api-reference/get-all-scorecards)**: Retrieve all scorecards from Port.
-- **[`get_scorecard`](/api-reference/get-a-scorecard)**: Retrieve information about a specific scorecard by its identifier.
-- **[`describe_user_details`](/api-reference/get-organization-details)**: Get information about your Port account, organization, and user profile details.
-- **`search_port_docs_sources`**: Search through Port documentation sources for relevant information.
-- **`ask_port_docs`**: Ask questions about Port documentation and get contextual answers.
-
-**Action execution tools**
-- **[`run_<action_identifier>`](/api-reference/execute-a-self-service-action)**: Execute any action you have permission to run in Port. The `action_identifier` corresponds to the identifier of the action you want to run. For example, if you have an action with the identifier `scaffold_microservice`, you can run it using `run_scaffold_microservice`.
-
-**AI agent tools**
-- **[`invoke_ai_agent`](/api-reference/invoke-an-agent)**: Invoke a Port AI agent with a specific prompt.
-
-
-
-
-**Builders** are typically platform engineers or admins who design and configure the developer portal - creating blueprints, setting up scorecards, and managing the overall structure. These tools help you build and maintain the portal.
-
-**All Developer tools**
-Builders have access to all the tools available to Developers (listed in the Developer tab), plus the additional management tools below.
-
-**Blueprint management tools**
-- **[`create_blueprint`](/api-reference/create-a-blueprint)**: Create a new blueprint in Port.
-- **[`update_blueprint`](/api-reference/update-a-blueprint)**: Update an existing blueprint.
-- **[`delete_blueprint`](/api-reference/delete-a-blueprint)**: Delete a blueprint from Port.
-
-**Entity management tools**
-- **[`create_entity`](/api-reference/create-an-entity)**: Create a new entity for a specific blueprint.
-- **[`update_entity`](/api-reference/update-an-entity)**: Update an existing entity.
-- **[`delete_entity`](/api-reference/delete-an-entity)**: Delete an entity.
-
-**Scorecard management tools**
-- **[`create_scorecard`](/api-reference/create-a-scorecard)**: Create a new scorecard for a specific blueprint.
-- **[`update_scorecard`](/api-reference/change-scorecards)**: Update an existing scorecard.
-- **[`delete_scorecard`](/api-reference/delete-a-scorecard)**: Delete a scorecard from Port.
-
-
-
-
-### Select which tools to use
-
-By default, when you open a chat with Port MCP, all available tools (based on your permissions) are loaded and ready to use. However, you can customize which tools are available if you want to focus on specific workflows.
-
-For example, if you only want to query data from Port without building or modifying anything, you can limit the tools to just the read-only ones. This can help reduce complexity and ensure you don't accidentally make changes.
-
-
-
-
-In Cursor, you can customize which tools are available through the UI after connecting to Port MCP. Once connected, you can select specific tools through Cursor's interface as shown below.
-
-
-
-
-
-
-In VSCode, you can customize which tools are available through the UI after connecting to Port MCP. Once connected, you can select specific tools through VSCode's interface as shown below.
-
-
-
-
-
-
-In Claude, you can specify which tools to enable during the custom connector setup process. You'll have the option to select specific tools through Claude's interface rather than enabling all available tools.
-
-
-
-Refer to the [Claude custom connector documentation](https://support.anthropic.com/en/articles/11175166-getting-started-with-custom-connectors-using-remote-mcp) for detailed instructions on tool selection during setup.
-
-
-
-
-### Prompts
-
-In Port, you can centrally manage reusable prompts and expose them to your users via the MCP Server. Once defined in Port, these prompts become available in supported MCP clients (for example, Cursor or Claude) where developers and AI agents can discover and run them with the required inputs.
-
-#### Common use cases
-
-- Automate on-call runbooks and incident triage guidance
-- Standardize code review or deployment checklists
-- Generate structured updates and communications (e.g., incident status, release notes)
-
-#### Setup data model
-
-1. Go to the [Builder page](https://app.getport.io/settings/data-model) of your portal.
-
-2. Click on "+ Blueprint".
-
-3. Click on the `{...}` button in the top right corner, and choose "Edit JSON".
-
-4. Paste the following JSON schema into the editor:
-
-
-
-
- Prompt blueprint JSON (click to expand)
-
- ```json showLineNumbers
- {
- "identifier": "prompt",
- "title": "Prompt",
- "icon": "Microservice",
- "schema": {
- "properties": {
- "description": {
- "type": "string",
- "title": "Description"
- },
- "arguments": {
- "items": {
- "type": "object",
- "properties": {
- "name": {
- "type": "string",
- "description": "The name of the argument parameter"
- },
- "description": {
- "type": "string",
- "description": "A description of what this argument is for"
- },
- "required": {
- "type": "boolean",
- "description": "Whether this argument is required or optional",
- "default": false
- }
- },
- "required": [
- "name",
- "description"
- ]
- },
- "type": "array",
- "title": "Arguments"
- },
- "template": {
- "icon": "DefaultProperty",
- "type": "string",
- "title": "Prompt Template",
- "format": "markdown"
- }
- },
- "required": [
- "description",
- "template"
- ]
- },
- "mirrorProperties": {},
- "calculationProperties": {},
- "aggregationProperties": {},
- "relations": {}
- }
- ```
-
-
-:::info Where prompts appear
-Once this blueprint exists and you create entities for it, prompts will show up in supported MCP clients connected to your Port organization. In clients that surface MCP prompts, you’ll see them listed and ready to run with arguments.
-:::
-
-#### Create prompts
-
-Create entities of the `prompt` blueprint for each prompt you want to expose. At minimum, provide `description` and `template`. Optionally add `arguments` to parameterize the prompt.
-
-1. Go to the [Prompts page](https://app.getport.io/prompts) in your portal.
-2. Click `Create prompt`.
-3. Fill out the form:
- - Provide a title and description.
- - Write the prompt template (supports markdown).
- - Define any `arguments` (optional) with `name`, `description`, and whether they are `required`.
-
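For automation, prompt entities can also be created through Port's entity API rather than the UI. The sketch below is a hedged illustration — it assumes the standard create-entity endpoint, the EU API URL, and a hypothetical `incident_triage` identifier:

```python
import json
from urllib import request

PORT_API_URL = "https://api.getport.io/v1"  # adjust for your region

def create_prompt_request(token: str, entity: dict) -> request.Request:
    # Build the POST request that creates an entity of the `prompt` blueprint.
    return request.Request(
        f"{PORT_API_URL}/blueprints/prompt/entities",
        data=json.dumps(entity).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

entity = {
    "identifier": "incident_triage",  # hypothetical identifier
    "title": "Incident triage",
    "properties": {
        "description": "Guide responders through incident triage",
        "template": "Triage the incident in {{service_name}}.",
        "arguments": [
            {"name": "service_name", "description": "Affected service", "required": True}
        ],
    },
}

# Usage (requires a real access token and network access):
# req = create_prompt_request("<access-token>", entity)
# json.load(request.urlopen(req))
```

The `properties` object mirrors the blueprint schema defined above: `description` and `template` are required, `arguments` is optional.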
-
-
-:::info Template and placeholders
-The `template` supports markdown and variable placeholders. Each argument defined in `arguments` is exposed by its `name` and can be referenced as `{{name}}` inside the template. When you run the prompt, the MCP Server collects values for required arguments and substitutes them into the matching `{{}}` placeholders before execution.
-:::
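Conceptually, the substitution described above behaves like a simple placeholder replace. A minimal illustrative sketch — not the MCP Server's actual implementation:

```python
import re

def render_prompt(template: str, args: dict) -> str:
    # Replace each {{name}} placeholder with the matching argument value;
    # placeholders without a provided value are left untouched.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(args.get(m.group(1), m.group(0))),
        template,
    )

rendered = render_prompt(
    "You are assisting with an incident in the {{service_name}} service ({{environment}}).",
    {"service_name": "checkout", "environment": "production"},
)
print(rendered)
# → You are assisting with an incident in the checkout service (production).
```

Leaving unmatched placeholders intact (rather than erroring) makes it easy to spot a missing or misspelled argument in the rendered output.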
-
-#### Examples
-
-
-
-
-Use placeholders to inject context such as the service, environment, incident, and timeframe.
-
-```markdown showLineNumbers
-You are assisting with an incident in the {{service_name}} service ({{environment}}).
-Incident ID: {{incident_id}}
-
-For the last {{timeframe}}:
-- Summarize critical alerts and recent deploys
-- Suggest next steps and owners
-- Link relevant dashboards/runbooks
-```
-
-Arguments to define: `service_name` (required), `environment` (optional), `incident_id` (required), `timeframe` (optional).
-
-
-
-
-Generate tailored remediation steps for failing scorecard rules.
-
-```markdown showLineNumbers
-For {{service_name}}, generate remediation steps for failing rules in the "{{scorecard_name}}" scorecard.
-
-For each failing rule:
-- What is failing
-- Why it matters
-- Step-by-step remediation
-- Owners and suggested timeline
-```
-
-Arguments to define: `service_name` (required), `scorecard_name` (required).
-
-
-
-
-Summarize on-call context for a team over a time window.
-
-```markdown showLineNumbers
-Create an on-call handoff for {{team}} for the last {{timeframe}}.
-
-Include:
-- Active incidents and current status
-- Top risks and mitigations
-- Pending actions and owners
-- Upcoming maintenance windows
-```
-
-Arguments to define: `team` (required), `timeframe` (required).
-
-
-
-
-After creating entities, reconnect or refresh your MCP client; your prompts will be available to run and will prompt for any defined arguments.
-
-#### See prompts in your client
-
-
-
-
-In Cursor, type "/" to open the prompts list. You'll see all `prompt` entities; selecting one opens an input form for its arguments.
-
-
-
-When you select a prompt, Cursor renders fields for the defined `arguments`. Required ones are marked and must be provided. The MCP Server substitutes provided values into the matching `{{}}` placeholders in the template at runtime.
-
-
-
-
-
-
-In Claude, click the "+" button and choose the prompts option to view the list from your Port organization. Selecting a prompt opens a parameter collection flow.
-
-
-
-Claude will ask for any required arguments before running the prompt, and the MCP Server will replace the corresponding `{{}}` placeholders in the template with the provided values.
-
-
-
-
-
-
-## Troubleshooting
-
-If you encounter issues while setting up or using the Port MCP Server, expand the relevant section below:
-
-
-How can I connect to the MCP? (Click to expand)
-
-Refer back to the [setup instructions](/ai-agents/port-mcp-server#setup) for your specific application (Cursor, VSCode, or Claude). Make sure you're using the correct regional URL for your Port organization.
-
-
-
-
-I completed the connection but nothing happens (Click to expand)
-
-Check that you've followed all the [setup steps](/ai-agents/port-mcp-server#setup) correctly for your application. Ensure you're authenticated with Port and have the necessary permissions. If you've followed all the steps and still have issues, please reach out to our support team.
-
-
-
-
-How can I use the MCP server? (Click to expand)
-
-Once connected, you can interact with Port through natural language in your application's chat interface. Ask questions about your software catalog, request help with building Port resources, or analyze your data. The [available tools](/ai-agents/port-mcp-server#available-tools) depend on your permissions (Developer vs Builder role).
-
-
-
-
-Why do I see an error about too many tools? (Click to expand)
-
-Each self-service action in your Port instance becomes an individual tool (named `run_<action_identifier>`). If your organization has many actions, this can result in a large number of tools being available.
-
-While most AI models handle this well, some have restrictions and may limit you to around 40 tools total. If you encounter errors about tool limits:
-
-1. **Reduce the number of tools** by customizing which tools are enabled (see [Select which tools to use](#select-which-tools-to-use) section above)
-2. **Focus on essential tools** by only enabling the read-only tools you need plus a few key actions
-3. **Contact your Port Admin** to review which actions are essential for your workflow
-
-This is completely normal behavior and doesn't indicate a problem with Port MCP; it's just a limitation of some AI models.
-
-
-
-:::tip Getting Help
-If you continue to experience issues, please reach out to Port support with:
-- Your IDE/application version
-- The specific error messages you're seeing
-- Your Port region (EU/US)
-- Steps you've already tried
-
-This information will help us provide more targeted assistance.
-:::
diff --git a/docs/ai-agents/_category_.json b/docs/ai-interfaces/_category_.json
similarity index 68%
rename from docs/ai-agents/_category_.json
rename to docs/ai-interfaces/_category_.json
index b20ead97ae..93807d895d 100644
--- a/docs/ai-agents/_category_.json
+++ b/docs/ai-interfaces/_category_.json
@@ -1,5 +1,5 @@
{
- "label": "AI agents & MCP server",
+ "label": "AI interfaces",
"position": 9,
"className": "custom-sidebar-item sidebar-menu-ai-agents"
}
\ No newline at end of file
diff --git a/docs/ai-interfaces/ai-agents/_category_.json b/docs/ai-interfaces/ai-agents/_category_.json
new file mode 100644
index 0000000000..b16a778c8f
--- /dev/null
+++ b/docs/ai-interfaces/ai-agents/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "AI agents",
+ "position": 1
+}
diff --git a/docs/ai-agents/build-an-ai-agent.md b/docs/ai-interfaces/ai-agents/build-an-ai-agent.md
similarity index 89%
rename from docs/ai-agents/build-an-ai-agent.md
rename to docs/ai-interfaces/ai-agents/build-an-ai-agent.md
index 094375bfe3..409cd9600f 100644
--- a/docs/ai-agents/build-an-ai-agent.md
+++ b/docs/ai-interfaces/ai-agents/build-an-ai-agent.md
@@ -18,7 +18,7 @@ Let's walk through the process of creating an agent that can assist your develop
## Create a new AI agent
-To create a new agent, head to the AI Agents catalog page (this page will be created for you when you [activate the feature](/ai-agents/overview#access-to-the-feature)).
+To create a new agent, head to the AI Agents catalog page (this page will be created for you when you [activate the feature](/ai-interfaces/ai-agents/overview#access-to-the-feature)).
Click on the "New AI Agent" button and fill the form with the agent details.
@@ -27,7 +27,7 @@ Click on the "New AI Agent" button and fill the form with the agent details.
We recommend following the steps below.
:::info MCP Server Backend Mode
-AI agents can be enhanced with MCP server backend mode for expanded capabilities including intelligent catalog access and Claude model processing. This is controlled when [interacting with the agents](/ai-agents/interact-with-ai-agents) through widgets and API calls, not in the agent configuration itself.
+AI agents can be enhanced with MCP server backend mode for expanded capabilities including intelligent catalog access and Claude model processing. This is controlled when [interacting with the agents](/ai-interfaces/ai-agents/interact-with-ai-agents) through widgets and API calls, not in the agent configuration itself.
:::
### Step 1: Define your agent's purpose
@@ -56,7 +56,7 @@ For example:
Pay attention to relationships between entities to ensure your agent can provide comprehensive answers.
:::tip Enhanced access with MCP server backend
-When using [MCP server backend mode](/ai-agents/interact-with-ai-agents) during interactions, the agent can intelligently access your entire catalog regardless of configured blueprints, providing more comprehensive answers.
+When using [MCP server backend mode](/ai-interfaces/ai-agents/interact-with-ai-agents) during interactions, the agent can intelligently access your entire catalog regardless of configured blueprints, providing more comprehensive answers.
:::
### Step 3: Configure actions (optional)
@@ -155,7 +155,7 @@ Choose conversation starters that:
When you feel your agent is ready:
1. Set its status to "Active".
-2. Start interacting with it through the [available interfaces](/ai-agents/interact-with-ai-agents).
+2. Start interacting with it through the [available interfaces](/ai-interfaces/ai-agents/interact-with-ai-agents).
## Evaluating your agent performance
@@ -166,7 +166,7 @@ Continuous evaluation and improvement are essential for maintaining effective AI
3. **Analyze execution plans**: Examine how the agent processes requests by reviewing the execution plan and tool calls for specific invocations. This helps identify where improvements are needed.
4. **Refine the prompt**: Update your agent's prompt based on your findings to address common issues.
-For more details on how to view execution plans and analyze agent behavior, see [Interact with AI agents](/ai-agents/interact-with-ai-agents).
+For more details on how to view execution plans and analyze agent behavior, see [Interact with AI agents](/ai-interfaces/ai-agents/interact-with-ai-agents).
## Examples
@@ -194,7 +194,7 @@ Your goal is to help developers initiate and track deployments to various enviro
## Formatting the agent response
To format the agent's response, you can specify the desired format in its prompt. For optimal results when using the UI, it's recommended to request a markdown format response.
This allows for better presentation and readability of the information provided by the agent.
-When sending messages through Slack, our [Slack app](/ai-agents/slack-app) convert the markdown format into a Slack compatible formatting.
+When sending messages through Slack, our [Slack app](/ai-interfaces/ai-agents/slack-app) converts the markdown format into Slack-compatible formatting.
### Example of a Markdown Response
```markdown
@@ -208,7 +208,7 @@ From [john-123](https://github.com/john-123)
I don't see an option to add an AI agent (Click to expand)
-Make sure you have [access to the AI agents feature](/ai-agents/overview#access-to-the-feature). Note that it's currently in closed beta and requires special access. If you believe you should have access, please contact our support.
+Make sure you have [access to the AI agents feature](/ai-interfaces/ai-agents/overview#access-to-the-feature). Note that it's currently in closed beta and requires special access. If you believe you should have access, please contact our support.
@@ -244,7 +244,7 @@ AI agents in Port can search, group, and index entities in your Port instance. H
- Sequential automations run as Admin.
:::info Enhanced capabilities with MCP server backend
-When using [MCP server backend mode](/ai-agents/interact-with-ai-agents) during interactions, many of these limitations are reduced as the agent gains access to enhanced tools and broader data access capabilities.
+When using [MCP server backend mode](/ai-interfaces/ai-agents/interact-with-ai-agents) during interactions, many of these limitations are reduced as the agent gains access to enhanced tools and broader data access capabilities.
:::
@@ -257,4 +257,4 @@ When configuring your agent's actions, make sure you select the "approval" optio
## Security considerations
AI agents in Port are designed with security and privacy as a priority.
-For more information on security and data handling, see our [AI agents overview](/ai-agents/overview#security-and-data-handling).
\ No newline at end of file
+For more information on security and data handling, see our [AI agents overview](/ai-interfaces/ai-agents/overview#security-and-data-handling).
\ No newline at end of file
diff --git a/docs/ai-agents/interact-with-ai-agents.md b/docs/ai-interfaces/ai-agents/interact-with-ai-agents.md
similarity index 97%
rename from docs/ai-agents/interact-with-ai-agents.md
rename to docs/ai-interfaces/ai-agents/interact-with-ai-agents.md
index 137da0ce12..8bbaf3caa9 100644
--- a/docs/ai-agents/interact-with-ai-agents.md
+++ b/docs/ai-interfaces/ai-agents/interact-with-ai-agents.md
@@ -80,7 +80,7 @@ The Slack integration provides the most natural way to interact with Port's AI a
You can interact with agents in two ways:
-1. **Direct messaging** the [Port Slack app](/ai-agents/slack-app). This will use the agent router.
+1. **Direct messaging** the [Port Slack app](/ai-interfaces/ai-agents/slack-app). This will use the agent router.
2. **Mentioning** the app in any channel it's invited to. This will also use the agent router.
When you send a message, the app will:
@@ -277,7 +277,7 @@ AI agents are standard Port entities belonging to the `_ai_agent` blueprint. Thi
You can discover available AI agents in your Port environment in a couple of ways:
-1. **AI Agents Catalog Page**: Navigate to the AI Agents catalog page in Port. This page lists all the agents that have been created in your organization. For more details on creating agents, refer to the [Build an AI agent guide](/ai-agents/build-an-ai-agent).
+1. **AI Agents Catalog Page**: Navigate to the AI Agents catalog page in Port. This page lists all the agents that have been created in your organization. For more details on creating agents, refer to the [Build an AI agent guide](/ai-interfaces/ai-agents/build-an-ai-agent).
2. **Via API**: Programmatically retrieve a list of all AI agents using the Port API. AI agents are entities of the `_ai_agent` blueprint. You can use the [Get all entities of a blueprint API endpoint](https://docs.port.io/api-reference/get-all-entities-of-a-blueprint) to fetch them, specifying `_ai_agent` as the blueprint identifier.
@@ -379,7 +379,7 @@ For **MCP Server Backend Mode:**
AI agent interactions in Port are designed with security and privacy as a priority.
-For more information on security and data handling, see our [AI agents overview](/ai-agents/overview#security-and-data-handling).
+For more information on security and data handling, see our [AI agents overview](/ai-interfaces/ai-agents/overview#security-and-data-handling).
## Troubleshooting & FAQ
@@ -409,7 +409,7 @@ We're working on adding direct interaction through the Port UI in the future.
Each agent has optional conversation starters to help you understand what it can help with. The questions you can ask depend on which agents were built in your organization.
-For information on building agents with specific capabilities, see our [Build an AI agent](/ai-agents/build-an-ai-agent) guide.
+For information on building agents with specific capabilities, see our [Build an AI agent](/ai-interfaces/ai-agents/build-an-ai-agent) guide.
@@ -435,7 +435,7 @@ Remember that AI agents are constantly learning and improving, but they're not i
My agent isn't responding in Slack (Click to expand)
Ensure that:
-- The [Port Slack app](/ai-agents/slack-app) is properly installed in your workspace.
+- The [Port Slack app](/ai-interfaces/ai-agents/slack-app) is properly installed in your workspace.
- The app has been invited to the channel where you're mentioning it.
- You're correctly mentioning the app (@Port).
- You've completed the authentication flow with the app.
diff --git a/docs/ai-agents/overview.md b/docs/ai-interfaces/ai-agents/overview.md
similarity index 91%
rename from docs/ai-agents/overview.md
rename to docs/ai-interfaces/ai-agents/overview.md
index 7db895db38..5fab939913 100644
--- a/docs/ai-agents/overview.md
+++ b/docs/ai-interfaces/ai-agents/overview.md
@@ -43,7 +43,7 @@ When using the MCP server backend mode, your AI agents gain:
- **Broader tool access**: Uses all read-only tools available in the MCP server for comprehensive insights
- **Smarter action selection**: Still respects your configured allowed actions while providing better context
-Your existing agents can immediately benefit from these enhancements by enabling the MCP server backend mode when [interacting with them](/ai-agents/interact-with-ai-agents) through widgets and API calls.
+Your existing agents can immediately benefit from these enhancements by enabling the MCP server backend mode when [interacting with them](/ai-interfaces/ai-agents/interact-with-ai-agents) through widgets and API calls.
### Example use cases
@@ -61,18 +61,18 @@ Your existing agents can immediately benefit from these enhancements by enabling
To start working with AI agents, follow these steps:
1. **Apply for access** - Submit your application via [this form](https://forms.gle/krhMY7c9JM8MyJJf7).
-2. **Access the feature** - If accepted, you will be able to [activate the AI agents](/ai-agents/overview#access-to-the-feature) in your Port organization.
-3. **Build your agents** - [Create custom agents](/ai-agents/build-an-ai-agent) to meet your developers' needs.
-4. **Interact with your agents** - Engage with your agents by following our [interaction guide](/ai-agents/interact-with-ai-agents).
+2. **Access the feature** - If accepted, you will be able to [activate the AI agents](/ai-interfaces/ai-agents/overview#access-to-the-feature) in your Port organization.
+3. **Build your agents** - [Create custom agents](/ai-interfaces/ai-agents/build-an-ai-agent) to meet your developers' needs.
+4. **Interact with your agents** - Engage with your agents by following our [interaction guide](/ai-interfaces/ai-agents/interact-with-ai-agents).
## Customization and control
-[Build and customize](/ai-agents/build-an-ai-agent) your AI agents:
+[Build and customize](/ai-interfaces/ai-agents/build-an-ai-agent) your AI agents:
- Define which data sources your agents can access.
- Determine what actions your agents can assist with.
- Set permissions for who can use specific agents.
- Configure how agents integrate with your workflows.
-- Choose between standard and MCP server backend modes when [interacting with agents](/ai-agents/interact-with-ai-agents).
+- Choose between standard and MCP server backend modes when [interacting with agents](/ai-interfaces/ai-agents/interact-with-ai-agents).
## Security and data handling
@@ -101,9 +101,9 @@ Your organization now has the system blueprints required for the feature to work
## Data Model
The data model of AI agents includes two main blueprints:
-1. **AI agents** - The agents themselves that you can interact with. You can build new ones and customize them as you wish. Learn more in our [Build an AI agent](/ai-agents/build-an-ai-agent) guide.
+1. **AI agents** - The agents themselves that you can interact with. You can build new ones and customize them as you wish. Learn more in our [Build an AI agent](/ai-interfaces/ai-agents/build-an-ai-agent) guide.
-2. **AI invocations** - Each interaction made with an AI agent is recorded as an invocation. This acts as a log of everything going through your AI agents so you can monitor and improve them over time. Learn more in our [Interact with AI agents](/ai-agents/interact-with-ai-agents) guide.
+2. **AI invocations** - Each interaction made with an AI agent is recorded as an invocation. This acts as a log of everything going through your AI agents so you can monitor and improve them over time. Learn more in our [Interact with AI agents](/ai-interfaces/ai-agents/interact-with-ai-agents) guide.
## Relevant guides
@@ -139,7 +139,7 @@ Port AI supports two primary interaction types:
How do users interact with Port AI? (Click to expand)
-- Primary interface is through our [Slack app](/ai-agents/slack-app).
+- Primary interface is through our [Slack app](/ai-interfaces/ai-agents/slack-app).
- Full [API availability](/api-reference/port-api/).
diff --git a/docs/ai-agents/slack-app.md b/docs/ai-interfaces/ai-agents/slack-app.md
similarity index 98%
rename from docs/ai-agents/slack-app.md
rename to docs/ai-interfaces/ai-agents/slack-app.md
index 40290311bd..fc69d2d909 100644
--- a/docs/ai-agents/slack-app.md
+++ b/docs/ai-interfaces/ai-agents/slack-app.md
@@ -31,7 +31,7 @@ This can be used to get quick answers to questions about your resources, such as
- To install the Slack app, you will first need to apply for access to Port's AI program by filling out [this form](https://forms.gle/krhMY7c9JM8MyJJf7).
- To interact with AI agents, you need to have at least one agent deployed in your portal.
- See the [Build an AI agent](https://docs.port.dev/ai-agents/build-an-ai-agent) page to learn more.
+ See the [Build an AI agent](https://docs.port.dev/ai-interfaces/ai-agents/build-an-ai-agent) page to learn more.
## Installation
diff --git a/docs/ai-interfaces/port-mcp-server/_category_.json b/docs/ai-interfaces/port-mcp-server/_category_.json
new file mode 100644
index 0000000000..a49101f8b4
--- /dev/null
+++ b/docs/ai-interfaces/port-mcp-server/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "MCP server",
+ "position": 2
+}
diff --git a/docs/ai-interfaces/port-mcp-server/available-tools.md b/docs/ai-interfaces/port-mcp-server/available-tools.md
new file mode 100644
index 0000000000..1b70f8c7c5
--- /dev/null
+++ b/docs/ai-interfaces/port-mcp-server/available-tools.md
@@ -0,0 +1,91 @@
+---
+sidebar_position: 3
+title: Tools
+---
+
+import Tabs from "@theme/Tabs"
+import TabItem from "@theme/TabItem"
+
+# Available tools
+
+The Port MCP Server exposes different sets of tools based on your role and use case. The tools you see will depend on your permissions in Port.
+
+
+
+
+**Developers** are typically users who consume and interact with the developer portal - querying services, running actions, and analyzing data. These tools help you get information and execute approved workflows.
+
+**Query and analysis tools**
+- **[`get_blueprints`](/api-reference/get-all-blueprints)**: Retrieve a list of all blueprints from Port.
+- **[`get_blueprint`](/api-reference/get-a-blueprint)**: Retrieve information about a specific blueprint by its identifier.
+- **[`get_entities`](/api-reference/get-all-entities-of-a-blueprint)**: Retrieve all entities for a given blueprint.
+- **[`get_entity`](/api-reference/get-an-entity)**: Retrieve information about a specific entity.
+- **[`get_scorecards`](/api-reference/get-all-scorecards)**: Retrieve all scorecards from Port.
+- **[`get_scorecard`](/api-reference/get-a-scorecard)**: Retrieve information about a specific scorecard by its identifier.
+- **[`describe_user_details`](/api-reference/get-organization-details)**: Get information about your Port account, organization, and user profile details.
+- **`search_port_docs_sources`**: Search through Port documentation sources for relevant information.
+- **`ask_port_docs`**: Ask questions about Port documentation and get contextual answers.
+
+**Action execution tools**
+- **[`run_`](/api-reference/execute-a-self-service-action)**: Execute any action you have permission to run in Port.
+
+**AI agent tools**
+- **[`invoke_ai_agent`](/api-reference/invoke-an-agent)**: Invoke a Port AI agent with a specific prompt.
+
+
+
+
+**Builders** are platform engineers or admins who design and configure the developer portal - creating blueprints, setting up scorecards, and managing the overall structure. These tools help you build and maintain the portal.
+
+**All Developer tools**
+Builders have access to all the tools available to Developers (listed above), plus the additional management tools below.
+
+**Blueprint management tools**
+- **[`create_blueprint`](/api-reference/create-a-blueprint)**
+- **[`update_blueprint`](/api-reference/update-a-blueprint)**
+- **[`delete_blueprint`](/api-reference/delete-a-blueprint)**
+
+**Entity management tools**
+- **[`create_entity`](/api-reference/create-an-entity)**
+- **[`update_entity`](/api-reference/update-an-entity)**
+- **[`delete_entity`](/api-reference/delete-an-entity)**
+
+**Scorecard management tools**
+- **[`create_scorecard`](/api-reference/create-a-scorecard)**
+- **[`update_scorecard`](/api-reference/change-scorecards)**
+- **[`delete_scorecard`](/api-reference/delete-a-scorecard)**
+
+
+
+
+## Select which tools to use
+
+By default, when you open a chat with Port MCP, all available tools (based on your permissions) are loaded and ready to use. However, you can customize which tools are available if you want to focus on specific workflows.
+
+For example, if you only want to query data from Port without building or modifying anything, you can limit the tools to just the read-only ones. This helps reduce complexity and ensures you don't accidentally make changes.
+
+
+
+
+In Cursor, you can customize which tools are available through the UI after connecting to Port MCP. Once connected, you can select specific tools as shown below.
+
+
+
+
+
+
+In VSCode, you can choose the tools through the UI after connecting to Port MCP.
+
+
+
+
+
+
+When creating a custom connector in Claude, you can specify exactly which tools to expose instead of enabling everything.
+
+
+
+Refer to the [Claude custom connector documentation](https://support.anthropic.com/en/articles/11175166-getting-started-with-custom-connectors-using-remote-mcp) for detailed instructions.
+
+
+
diff --git a/docs/ai-interfaces/port-mcp-server/overview-and-installation.md b/docs/ai-interfaces/port-mcp-server/overview-and-installation.md
new file mode 100644
index 0000000000..d8464a8a2a
--- /dev/null
+++ b/docs/ai-interfaces/port-mcp-server/overview-and-installation.md
@@ -0,0 +1,318 @@
+---
+sidebar_position: 1
+title: Overview & Installation
+---
+
+import Tabs from "@theme/Tabs"
+import TabItem from "@theme/TabItem"
+
+# Port MCP server
+
+
+
+
+
+
+
+
+The Port Model Context Protocol (MCP) Server acts as a bridge, enabling Large Language Models (LLMs)—like those powering Claude, Cursor, or GitHub Copilot—to interact directly with your Port.io developer portal. This allows you to leverage natural language to query your software catalog, analyze service health, manage resources, and even streamline development workflows, all from your preferred interfaces.
+
+:::info AI Agents vs. MCP Server
+The Port MCP Server is currently in open beta and provides significant standalone value, independent of our [AI Agents feature](/ai-interfaces/ai-agents/overview). Port AI Agents are currently in closed beta with limited access, while the MCP Server gives you immediate access to streamline building in Port, query your catalog, analyze service health, and streamline development workflows using natural language.
+
+While the MCP Server can interact with Port AI Agents when available, the core MCP functionality can be used freely without requiring access to the closed beta AI Agents feature.
+:::
+
+## Why integrate LLMs with your developer portal?
+
+The primary advantage of the Port MCP Server is the ability to bring your developer portal's data and actions into the conversational interfaces you already use. This offers several benefits:
+
+* **Reduced Context Switching:** Access Port information and initiate actions without leaving your IDE or chat tool.
+* **Increased Efficiency:** Get answers and perform tasks faster using natural language commands.
+* **Improved Developer Experience:** Make your developer portal more accessible and intuitive to interact with.
+* **Enhanced Data-Driven Decisions:** Easily pull specific data points from Port to inform your work in real-time.
+
+As one user put it:
+
+> "It would be interesting to build a use case where a developer could ask Copilot from his IDE about stuff Port knows about, without actually having to go to Port."
+
+The Port MCP Server directly enables these kinds of valuable, in-context interactions.
+
+## Key capabilities and use-cases
+
+
+
+
+
+
+
+The Port MCP Server enables you to interact with your Port data and capabilities directly through natural language within your chosen LLM-powered tools. Here's what you can achieve:
+
+### Find information quickly
+
+Effortlessly query your software catalog and get immediate answers. This eliminates the need to navigate through UIs or write complex API queries when you need information.
+
+* Ask: "Who is the owner of service X?"
+* Ask: "How many services do we have in production?"
+* Ask: "Show me all the microservices owned by the Backend team."
+* Ask: "What are the dependencies of the 'OrderProcessing' service?"
+
+
+
+### Vibe-build in Port
+
+Leverage Claude's capabilities to manage and build your entire Port software catalog. You can create and configure blueprints, set up self-service actions, design scorecards, and more.
+
+* Ask: "Please help me apply this guide into my Port instance - [[guide URL]]"
+* Ask: "I want to start managing my k8s deployments, how can we build it in Port?"
+* Ask: "I want a new production readiness scorecard to track the code quality and service alerts"
+* Ask: "Create a new self-service action in Port to scaffold a new service"
+
+
+
+### Analyze scorecards and quality
+
+Gain insights into service health, compliance, and quality by leveraging Port's scorecard data. Identify areas for improvement and track progress against your standards.
+
+* Ask: "Which services are failing our security requirements scorecard?"
+* Ask: "What's preventing the 'InventoryService' from reaching Gold level in the 'Production Readiness' scorecard?"
+* Ask: "Show me the bug count vs. test coverage for all Java microservices."
+
+
+
+* Ask: "Which of our services are missing critical monitoring dashboards?"
+
+
+
+### Streamline development and operations
+
+Receive assistance with common development and operational tasks, directly within your workflow.
+
+* Ask: "What do I need to do to set up a new 'ReportingService'?"
+* Ask: "Guide me through creating a new component blueprint with 'name', 'description', and 'owner' properties."
+* Ask: "Help me add a rule to the 'Tier1Services' scorecard that requires an on-call schedule to be defined."
+
+
+
+### Find your own use cases
+
+You can use Port's MCP to find the use cases that will be valuable to you. Try using this prompt: "think of creative prompts I can use to showcase the power of Port's MCP, based on the data available in Port"
+
+
+## Installing Port MCP
+
+Installing Port's MCP is simple. Follow the instructions for your preferred tool, or learn about the archived local MCP server.
+
+
+
+To connect Cursor to Port's remote MCP, follow these steps:
+
+1. **Go to Cursor settings, click on Tools & Integrations, and add a new MCP server**
+
+
+
+2. **Add the following configuration**
+
+Use the appropriate configuration for your region:
+
+
+
+```json showLineNumbers
+{
+ "mcpServers": {
+ "port-eu": {
+ "url": "https://mcp.port.io/v1"
+ }
+ }
+}
+```
+
+
+```json showLineNumbers
+{
+ "mcpServers": {
+ "port-us": {
+ "url": "https://mcp.us.port.io/v1"
+ }
+ }
+}
+```
+
+
+
+
+
+3. **Login to Port**
+Click on "Needs login", and complete the authentication flow in the window that opens up.
+
+
+4. **See the MCP tools**
+After successfully connecting to Port, you'll see the list of available tools from the MCP.
+
+
+:::warning Authentication window behavior
+In some cases, after clicking "Accept" in the authentication popup, the window may not close even though the connection was established successfully. You can safely close the window.
+
+If you still don't see the tools, try connecting again a couple of times. We are aware of this behavior and are working to improve it.
+:::
+
+
+
+To connect VSCode to Port's remote MCP server, follow these detailed steps. For complete instructions, refer to the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/chat/mcp-servers).
+
+:::info VSCode MCP requirements
+Before proceeding, ensure your VS Code is updated to the latest version and that MCP is enabled for your GitHub organization. An organization admin may need to enable "Editor preview features" under Settings > Code, planning, and automation > Copilot.
+:::
+
+:::tip Prerequisites
+This configuration uses the open-source `mcp-remote` package, which requires Node.js to be installed on your system. Before using the configuration, ensure Node.js is available by running:
+
+```bash
+npx -y mcp-remote --help
+```
+
+If you encounter errors:
+- **Missing Node.js**: Install Node.js from [nodejs.org](https://nodejs.org/)
+- **Network issues**: Check your internet connection and proxy settings
+- **Permission issues**: You may need to run with appropriate permissions
+:::
+
+
+**Step 1: Configure MCP Server Settings**
+
+1. Open VS Code settings
+2. Search for "MCP: Open user configuration" (or follow the instructions on a workspace installation)
+3. Add the server configuration for your region:
+
+
+
+```json showLineNumbers
+{
+ "mcpServers": {
+ "port-vscode-eu": {
+ "command": "npx",
+ "args": [
+ "-y",
+ "mcp-remote",
+ "https://mcp.port.io/v1"
+ ]
+ }
+ }
+}
+```
+
+
+```json showLineNumbers
+{
+ "mcpServers": {
+ "port-vscode-us": {
+ "command": "npx",
+ "args": [
+ "-y",
+ "mcp-remote",
+ "https://mcp.us.port.io/v1"
+ ]
+ }
+ }
+}
+```
+
+
+
+**Step 2: Start the MCP Server**
+
+1. After adding the configuration, click on "Start" to initialize the MCP server
+2. If you don't see the "Start" button, ensure:
+ - Your VS Code version is updated to the latest version
+ - MCP is enabled for your GitHub organization
+ - "Editor preview features" is enabled under Settings > Code, planning, and automation > Copilot
+
+**Step 3: Verify Connection**
+
+1. Once started, you should see the number of available tools displayed
+2. If you don't see the tools count:
+ - Click on "More" to expand additional options
+ - Select "Show output" to view detailed logs
+ - Check the output panel for any error messages or connection issues
+
+**Step 4: Access Port Tools**
+
+1. Start a new chat session in VS Code
+2. Click on the tools icon in the chat interface
+3. You should now see Port tools available for use
+
+
+
+
+
+To connect Claude to Port's remote MCP, you need to create a custom connector. This process does not require a client ID. For detailed instructions, refer to the [official Anthropic documentation on custom connectors](https://support.anthropic.com/en/articles/11175166-getting-started-with-custom-connectors-using-remote-mcp).
+
+When prompted for the remote MCP server URL, use the appropriate URL for your region:
+
+
+
+```
+https://mcp.port.io/v1
+```
+
+
+```
+https://mcp.us.port.io/v1
+```
+
+
+
+
+The local MCP server is an open-source project that you can run on your own infrastructure. It offers a similar set of capabilities, but requires manual setup and maintenance.
+
+While you can still use it, it is no longer supported, and we do not track issues or activity on the project.
+
+
Prerequisites
+
+- A Port.io account with appropriate permissions.
+- Your Port credentials (Client ID and Client Secret). You can create these from your Port.io dashboard under Settings > Credentials.
+
+
Installation
+
+The Port MCP Server can be installed using Docker or `uvx` (a command-line tool runner from the `uv` Python package manager). While the setup is straightforward, the specifics can vary based on your chosen MCP client (Claude, Cursor, VS Code, etc.).
+
+:::info Detailed Installation Guide
+For comprehensive, step-by-step installation instructions for various platforms and methods (Docker, UVX), please refer to the **[Port MCP Server GitHub README](https://github.com/port-labs/port-mcp-server)**.
+The README provides the latest configuration details and examples for different setups.
+:::
+
+
+
+## Token-based authentication
+
+You can also connect using token-based authentication for automated environments like CI/CD pipelines where interactive authentication isn't possible:
+
+```bash
+curl -X POST "https://api.getport.io/v1/auth/access_token" \
+ -H "Content-Type: application/json" \
+ -d '{"clientId":"YOUR_CLIENT_ID","clientSecret":"YOUR_CLIENT_SECRET"}'
+```
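As a sketch of how a CI job might consume this endpoint, the snippet below builds the same request using only the Python standard library. The `accessToken` response field name is an assumption here — confirm the exact response shape in the token-based authentication guide:

```python
import json
import urllib.request

TOKEN_URL = "https://api.getport.io/v1/auth/access_token"

def build_token_request(client_id: str, client_secret: str) -> urllib.request.Request:
    """Build the POST request that exchanges Port credentials for a token."""
    payload = json.dumps({"clientId": client_id, "clientSecret": client_secret})
    return urllib.request.Request(
        TOKEN_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def fetch_port_token(client_id: str, client_secret: str) -> str:
    # Assumes the response JSON carries the token in an `accessToken` field;
    # verify against the token-based authentication guide.
    with urllib.request.urlopen(build_token_request(client_id, client_secret)) as resp:
        return json.load(resp)["accessToken"]
```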
+
+For complete examples and detailed setup instructions, see our [token-based authentication guide](./token-based-authentication).
+
+## Connecting to AI Agents
+
+To connect the Port MCP server to AI agents in CI/CD environments or other automated contexts where interactive authentication isn't possible, see our [token-based authentication guide](./token-based-authentication).
diff --git a/docs/ai-interfaces/port-mcp-server/prompts.md b/docs/ai-interfaces/port-mcp-server/prompts.md
new file mode 100644
index 0000000000..078c8acbfc
--- /dev/null
+++ b/docs/ai-interfaces/port-mcp-server/prompts.md
@@ -0,0 +1,440 @@
+---
+sidebar_position: 4
+title: Prompts
+---
+
+import Tabs from "@theme/Tabs"
+import TabItem from "@theme/TabItem"
+
+# Prompts
+
+
+
+
+
+
+
+
+Port allows you to centrally manage reusable prompts and expose them to your users via the MCP Server. Once defined, prompts become available in supported MCP clients (for example, Cursor or Claude) where developers and AI agents can discover and run them with the required inputs.
+
+#### Common use cases
+
+- Automate on-call runbooks and incident-triage guidance
+- Standardize code review or deployment checklists
+- Generate structured updates and communications (e.g., incident status, release notes)
+
+#### Set up the data model
+
+1. Go to the [Builder page](https://app.getport.io/settings/data-model) in your portal.
+2. Click **+ Blueprint**.
+3. Click the `{...}` button in the top-right corner and choose **Edit JSON**.
+4. Paste the following JSON schema into the editor:
+
+
+ Prompt blueprint JSON (click to expand)
+
+ ```json showLineNumbers
+ {
+ "identifier": "prompt",
+ "title": "Prompt",
+ "icon": "Microservice",
+ "ownership": {
+ "type": "Direct",
+ "title": "Owning Teams"
+ },
+ "schema": {
+ "properties": {
+ "description": {
+ "type": "string",
+ "title": "Description"
+ },
+ "arguments": {
+ "items": {
+ "type": "object",
+ "properties": {
+ "name": {
+ "type": "string",
+ "description": "The name of the argument parameter"
+ },
+ "description": {
+ "type": "string",
+ "description": "A description of what this argument is for"
+ },
+ "required": {
+ "type": "boolean",
+ "description": "Whether this argument is required or optional",
+ "default": false
+ }
+ },
+ "required": [
+ "name",
+ "description"
+ ]
+ },
+ "type": "array",
+ "title": "Arguments"
+ },
+ "template": {
+ "icon": "DefaultProperty",
+ "type": "string",
+ "title": "Prompt Template",
+ "format": "markdown"
+ }
+ },
+ "required": [
+ "description",
+ "template"
+ ]
+ },
+ "mirrorProperties": {},
+ "calculationProperties": {},
+ "aggregationProperties": {},
+ "relations": {}
+ }
+ ```
+
+
+:::info Where prompts appear
+After this blueprint exists and you create entities for it, prompts will show up in supported MCP clients connected to your Port organization. In clients that surface MCP prompts, you’ll see them listed and ready to run with arguments.
+:::
+
+#### Create prompts
+
+Create entities of the `prompt` blueprint for each prompt you want to expose. At minimum, provide `description` and `template`. Optionally add `arguments` to parameterize the prompt.
+
+
+
+
+1. Go to the [Prompts page](https://app.getport.io/prompts) in your portal.
+2. Click **Create prompt**.
+3. Fill out the form:
+ - Provide a title and description.
+ - Write the prompt template (supports markdown).
+ - Define any `arguments` (optional) with `name`, `description`, and whether they are `required`.
+
+
+
+:::info Template and placeholders
+The `template` supports markdown and variable placeholders. Each argument defined in `arguments` is exposed by its `name` and can be referenced as `{{name}}` inside the template. When you run the prompt, the MCP Server collects values for required arguments and substitutes them into the matching placeholders before execution.
+:::
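As a mental model for the substitution described above, the sketch below mimics the behavior in Python. This is illustrative only, not Port's actual implementation, and assumes simple `{{name}}` placeholders with no nesting or escaping:

```python
import re

def render_prompt(template: str, arguments: dict) -> str:
    """Replace each {{name}} placeholder with its argument value.

    Illustrative sketch only -- the MCP Server performs the real
    substitution server-side; this just mirrors the documented behavior.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in arguments:
            raise KeyError(f"missing argument: {name}")
        return str(arguments[name])

    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

template = "You are assisting with an incident in the {{service_name}} service ({{environment}})."
print(render_prompt(template, {"service_name": "payment-service", "environment": "production"}))
# → You are assisting with an incident in the payment-service service (production).
```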
+
+
+
+
+You can create a Self-Service Action in Port to allow your users to create prompts themselves.
+
+1. Go to the [self-service](https://app.getport.io/self-serve) page of your portal.
+2. Click on `+ New Action`.
+3. Click on the `{...} Edit JSON` button.
+4. Copy and paste the following JSON configuration:
+
+
+ Create New Prompt action JSON (Click to expand)
+
+ ```json showLineNumbers
+ {
+ "identifier": "create_new_prompt",
+ "title": "Create New Prompt",
+ "icon": "Microservice",
+ "description": "Create prompt templates that appear in MCP clients (Claude, Cursor, VS Code, etc.) connected to your Port organization. Users can select prompts, provide required arguments, and get contextual AI assistance with dynamic data from Port.",
+ "trigger": {
+ "type": "self-service",
+ "operation": "CREATE",
+ "userInputs": {
+ "properties": {
+ "arguments": {
+ "type": "array",
+ "title": "Template Arguments",
+ "description": "Define arguments that users will provide when running this prompt. Each argument becomes available as {{argument_name}} placeholder in the template. Required arguments must be provided before prompt execution.",
+ "items": {
+ "type": "object",
+ "properties": {
+ "name": {
+ "type": "string",
+ "title": "Argument Name",
+ "pattern": "^[a-zA-Z_][a-zA-Z0-9_]*$",
+ "description": "The parameter name that will be substituted in the template using {{name}} syntax (e.g., 'service_name', 'environment', 'incident_id')"
+ },
+ "description": {
+ "type": "string",
+ "title": "Argument Description",
+ "description": "Clear description explaining what this argument represents and how it's used in the prompt context"
+ },
+ "is_required": {
+ "type": "boolean",
+ "title": "Is Required",
+ "default": false,
+ "description": "When true, the MCP client (Claude, Cursor, VS Code) will require this argument before executing the prompt"
+ }
+ }
+ }
+ },
+ "owning_team": {
+ "type": "string",
+ "title": "Owning Team (Optional)",
+ "description": "The team that will own and maintain this prompt template",
+ "format": "entity",
+ "blueprint": "_team"
+ },
+ "prompt_title": {
+ "type": "string",
+ "title": "Prompt Title",
+ "description": "Human-readable name for this prompt (displayed in MCP clients like Claude, Cursor, and VS Code)",
+ "minLength": 3,
+ "maxLength": 50
+ },
+ "prompt_template": {
+ "type": "string",
+ "title": "Prompt Template",
+ "description": "The prompt content with placeholders for dynamic values. Use {{argument_name}} to reference arguments (e.g., 'Analyze service {{service_name}} in {{environment}}'). Supports markdown formatting. The MCP Server substitutes values into {{}} placeholders when the prompt runs.",
+ "minLength": 20,
+ "format": "multi-line"
+ },
+ "prompt_description": {
+ "type": "string",
+ "title": "Description",
+ "description": "Explain what this prompt does and when to use it. This description helps users select the right prompt from the MCP client interface.",
+ "minLength": 10,
+ "maxLength": 500,
+ "format": "multi-line"
+ }
+ },
+ "required": [
+ "prompt_title",
+ "prompt_description",
+ "prompt_template"
+ ],
+ "order": [
+ "prompt_title",
+ "prompt_description",
+ "prompt_template",
+ "arguments",
+ "owning_team"
+ ],
+ "titles": {}
+ },
+ "blueprintIdentifier": "prompt"
+ },
+ "invocationMethod": {
+ "type": "UPSERT_ENTITY",
+ "blueprintIdentifier": "prompt",
+ "mapping": {
+ "identifier": "{{ .inputs.prompt_title | ascii_downcase | gsub(\" \"; \"_\") | gsub(\"[^a-z0-9_]\"; \"\") }}",
+ "title": "{{ .inputs.prompt_title }}",
+ "team": "{{ if (.inputs.owning_team | type) == \"object\" then [.inputs.owning_team.identifier] else [] end }}",
+ "properties": {
+ "template": "{{ .inputs.prompt_template }}",
+ "arguments": "{{ (.inputs.arguments // []) | map({name: .name, description: .description, required: .is_required}) }}",
+ "description": "{{ .inputs.prompt_description }}"
+ }
+ }
+ },
+ "requiredApproval": false
+ }
+ ```
+
+
+
+:::tip Developer self-service
+This Self-Service Action allows developers to create their own prompts without needing direct access to Port's data model. The action validates input and automatically creates properly formatted prompt entities.
+:::
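Note how the `identifier` in the mapping above is derived from the prompt title with jq filters (`ascii_downcase`, then two `gsub` calls). The following Python sketch reproduces that transformation so you can predict the identifier a given title will produce (approximate: jq's `ascii_downcase` only lowercases ASCII letters, while Python's `lower()` also affects non-ASCII characters):

```python
import re

def derive_identifier(title: str) -> str:
    # Mirrors the jq pipeline:
    #   ascii_downcase | gsub(" "; "_") | gsub("[^a-z0-9_]"; "")
    lowered = title.lower()                  # downcase the title
    underscored = lowered.replace(" ", "_")  # spaces -> underscores
    return re.sub(r"[^a-z0-9_]", "", underscored)  # drop everything else

print(derive_identifier("Create New Prompt"))  # → create_new_prompt
```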
+
+
+
+
+#### Examples
+
+
+
+
+A prompt that assists on-call engineers by summarizing recent alerts and deploys related to an incident, then suggesting next steps and linking relevant runbooks.
+
+**Example prompt execution:**
+```markdown
+You are assisting with an incident in the payment-service service (production).
+Incident ID: INC-2024-001
+
+For the last 24 hours:
+- Summarize critical alerts and recent deploys
+- Suggest next steps and owners
+- Link relevant dashboards/runbooks
+```
+
+
+Incident triage prompt entity JSON (Click to expand)
+
+```json showLineNumbers
+{
+ "identifier": "incident_response_assistant",
+ "title": "Incident Response Assistant",
+ "team": [],
+ "properties": {
+ "description": "Assists with incident response by summarizing critical alerts, recent deploys, and suggesting next steps with relevant dashboards and runbooks",
+ "arguments": [
+ {
+ "name": "service_name",
+ "required": true,
+ "description": "The name of the service experiencing the incident"
+ },
+ {
+ "name": "environment",
+ "required": false,
+ "description": "The environment where the incident is occurring (e.g., production, staging)"
+ },
+ {
+ "name": "incident_id",
+ "required": true,
+ "description": "The unique identifier for the incident"
+ },
+ {
+ "name": "timeframe",
+ "required": false,
+ "description": "The time period to analyze (e.g., '24 hours', '1 week')"
+ }
+ ],
+ "template": "You are assisting with an incident in the {{service_name}} service ({{environment}}).\nIncident ID: {{incident_id}}\n\nFor the last {{timeframe}}:\n- Summarize critical alerts and recent deploys\n- Suggest next steps and owners\n- Link relevant dashboards/runbooks"
+ },
+ "relations": {},
+ "icon": "Microservice"
+}
+```
+
+
+
+
+
+
+A prompt that guides engineers to remediate failing scorecard rules by explaining each failure, its impact, and providing step-by-step fixes along with ownership suggestions.
+
+**Example prompt execution:**
+```markdown
+For user-management-service, generate remediation steps for failing rules in the "Production Readiness" scorecard.
+
+For each failing rule:
+- What is failing
+- Why it matters
+- Step-by-step remediation
+- Owners and suggested timeline
+```
+
+
+Scorecard remediation prompt entity JSON (Click to expand)
+
+```json showLineNumbers
+{
+ "identifier": "scorecard_remediation_guide",
+ "title": "Scorecard Remediation Guide",
+ "team": [],
+ "properties": {
+ "description": "Generate detailed remediation steps for failing scorecard rules, including what's failing, why it matters, step-by-step fixes, and ownership assignments",
+ "arguments": [
+ {
+ "name": "service_name",
+ "required": true,
+ "description": "The name of the service that needs scorecard remediation"
+ },
+ {
+ "name": "scorecard_name",
+ "required": true,
+ "description": "The name of the scorecard with failing rules"
+ }
+ ],
+ "template": "For {{service_name}}, generate remediation steps for failing rules in the \"{{scorecard_name}}\" scorecard.\n\nFor each failing rule:\n- What is failing\n- Why it matters\n- Step-by-step remediation\n- Owners and suggested timeline"
+ },
+ "relations": {},
+ "icon": "Microservice"
+}
+```
+
+
+
+
+
+
+A prompt that generates a thorough on-call handoff report, highlighting active incidents, key risks, pending actions, and upcoming maintenance for the specified team.
+
+**Example prompt execution:**
+```markdown
+Create an on-call handoff for Platform Engineering for the past week.
+
+Include:
+- Active incidents and current status
+- Top risks and mitigations
+- Pending actions and owners
+- Upcoming maintenance windows
+```
+
+
+On-Call handoff report prompt entity JSON (Click to expand)
+
+```json showLineNumbers
+{
+ "identifier": "oncall_handoff_report",
+ "title": "On-Call Handoff Report",
+ "team": [],
+ "properties": {
+ "description": "Generate comprehensive on-call handoff documentation including active incidents, risks, pending actions, and upcoming maintenance windows",
+ "arguments": [
+ {
+ "name": "team",
+ "required": true,
+ "description": "The team name for which to create the on-call handoff"
+ },
+ {
+ "name": "timeframe",
+ "required": true,
+ "description": "The time period to cover in the handoff (e.g., 'last 24 hours', 'past week')"
+ }
+ ],
+ "template": "Create an on-call handoff for {{team}} for the last {{timeframe}}.\n\nInclude:\n- Active incidents and current status\n- Top risks and mitigations\n- Pending actions and owners\n- Upcoming maintenance windows"
+ },
+ "relations": {},
+ "icon": "Microservice"
+}
+```
+
+
+
+
+
+
+
+After creating entities, reconnect or refresh your MCP client; your prompts will become available and will ask for any defined arguments.
+
+#### See prompts in your client
+
+
+
+
+In Cursor, type **/** to open the prompts list. Selecting a prompt opens an input form for its arguments.
+
+
+
+When you select a prompt, Cursor renders fields for the defined `arguments`. Required ones are marked and must be provided.
+
+
+
+
+
+
+In Claude, click the **+** button and choose the prompts option to view the list from your Port organization. Selecting a prompt opens a parameter collection flow.
+
+
+
+Claude will ask for any required arguments before running the prompt and will substitute them into the template.
+
+
+
+
+
+
+
diff --git a/docs/ai-interfaces/port-mcp-server/token-based-authentication.md b/docs/ai-interfaces/port-mcp-server/token-based-authentication.md
new file mode 100644
index 0000000000..30f7f51882
--- /dev/null
+++ b/docs/ai-interfaces/port-mcp-server/token-based-authentication.md
@@ -0,0 +1,111 @@
+---
+sidebar_position: 2
+title: Token-based connection
+---
+
+# Token-based connection to Port MCP server
+
+When integrating the Port MCP Server with AI agents in automated environments like CI/CD pipelines, the authentication approach differs from interactive local usage. This guide explains how to establish secure connections between AI agents and the Port MCP Server without requiring user interaction.
+
+## The Challenge
+
+Interactive OAuth flows that work well for local development become problematic in automated environments because:
+
+- **No User Interaction**: CI/CD pipelines and automated agents can't handle browser-based authentication flows
+- **Security Requirements**: Credentials must be managed securely without exposing them in logs or configurations
+- **Token Management**: Short-lived tokens are preferred for security, but must be programmatically generated
+
+## The Solution
+
+The Port MCP Server supports programmatic authentication using the Client Credentials flow, which enables AI agents to:
+
+1. **Generate short-lived access tokens** using your Port client credentials (`clientId` + `clientSecret`)
+2. **Connect to the remote MCP server** with the generated token for secure API access
+3. **Invoke Port tools** through the MCP interface without user intervention
+
+This approach maintains security while enabling powerful automation capabilities.
+
+## Example: Claude Code in GitHub Actions
+
+Here's a complete example showing how to connect Claude Code to the Port MCP Server within a GitHub Actions workflow:
+
+
+Show example workflow
+
+```yaml title=".github/workflows/claude-code-mcp.yml" showLineNumbers
+name: Port MCP Server Demo with Claude Code
+on: workflow_dispatch
+
+env:
+ PORT_MCP_URL: ${{ vars.PORT_MCP_URL }}
+ PORT_AUTH_BASE_URL: ${{ vars.PORT_AUTH_BASE_URL }}
+
+jobs:
+ demo:
+ runs-on: ubuntu-latest
+ permissions:
+ id-token: write
+ contents: read
+
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v4
+
+ - name: Authenticate with Port
+ id: port-auth
+ run: |
+ response=$(curl -s -X POST "${{ env.PORT_AUTH_BASE_URL }}/auth/access_token" \
+ -H "Content-Type: application/json" \
+ -d '{"clientId":"${{ secrets.PORT_CLIENT_ID }}","clientSecret":"${{ secrets.PORT_CLIENT_SECRET }}"}')
+ token=$(echo "$response" | jq -r '.accessToken')
+ echo "::add-mask::$token"
+ echo "access_token=$token" >> "$GITHUB_OUTPUT"
+
+ - name: Claude Code against Port MCP
+ uses: anthropics/claude-code-action@beta
+ with:
+ anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
+ mode: agent
+ mcp_config: |
+ {
+ "mcpServers": {
+ "port-prod": {
+ "command": "npx",
+ "args": [
+ "mcp-remote",
+ "${{ env.PORT_MCP_URL }}",
+ "--header",
+ "Authorization: Bearer ${{ steps.port-auth.outputs.access_token }}"
+ ]
+ }
+ }
+ }
+ allowed_tools: "mcp__port-prod__list_blueprints,mcp__port-prod__get_entities"
+ direct_prompt: |
+ List all blueprints, then show entities of the "zendesk_ticket" blueprint.
+```
+
+
+### How this workflow works
+
+1. **Authentication** – The `port-auth` step exchanges your Port client credentials for a short-lived access token using the Client Credentials flow
+2. **MCP Connection** – Claude Code connects to the remote MCP server using the `mcp-remote` package, passing the access token in the Authorization header
+3. **Tool Access** – Claude Code can invoke only the specific Port tools listed in `allowed_tools`, ensuring controlled access to your Port instance
+4. **Execution** – The AI agent executes the provided prompt using the available Port tools to query your software catalog
+
+:::tip Customize your integration
+For read-only workflows, limit `allowed_tools` to just the query operations you need.
+Choose the appropriate MCP URL for your Port region (EU: `https://mcp.port.io/v1`, US: `https://mcp.us.port.io/v1`).
+:::
+
+## Adapting for Other AI Agents
+
+While this example focuses on Claude Code, the same authentication pattern can be applied to other AI agents and platforms:
+
+### General Integration Steps
+
+1. **Obtain Port Credentials**: Create a client ID and secret in your Port dashboard.
+2. **Generate Access Token**: Use the Client Credentials flow to get a short-lived token.
+3. **Configure MCP Connection**: Point your AI agent to the remote MCP server with the token.
+4. **Define Tool Permissions**: Specify which Port tools the AI agent can access.
+5. **Execute Workflows**: Let the AI agent interact with your Port data and capabilities.
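+
+While exact configuration keys vary by client, any MCP client that supports remote servers can reuse the same connection shape. As an illustrative sketch (the `mcpServers`/`mcp-remote` structure follows the workflow example above; the server name and `<ACCESS_TOKEN>` are placeholders):
+
+```json showLineNumbers
+{
+  "mcpServers": {
+    "port": {
+      "command": "npx",
+      "args": [
+        "mcp-remote",
+        "https://mcp.port.io/v1",
+        "--header",
+        "Authorization: Bearer <ACCESS_TOKEN>"
+      ]
+    }
+  }
+}
+```
+
+Replace the URL with your region's MCP endpooint as noted above, and substitute `<ACCESS_TOKEN>` with a token generated via the Client Credentials flow.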
diff --git a/docs/ai-interfaces/port-mcp-server/troubleshooting.md b/docs/ai-interfaces/port-mcp-server/troubleshooting.md
new file mode 100644
index 0000000000..f21fc4ae63
--- /dev/null
+++ b/docs/ai-interfaces/port-mcp-server/troubleshooting.md
@@ -0,0 +1,47 @@
+---
+sidebar_position: 6
+title: Troubleshooting
+---
+
+# Troubleshooting
+
+If you encounter issues while setting up or using the Port MCP Server, expand the relevant section below:
+
+
+How can I connect to the MCP? (Click to expand)
+
+Refer back to the [setup instructions](./overview-and-installation#installing-port-mcp) for your specific application (Cursor, VSCode, or Claude). Make sure you're using the correct regional URL for your Port organization.
+
+
+
+
+I completed the connection but nothing happens (Click to expand)
+
+Check that you've followed all the [setup steps](./overview-and-installation#installing-port-mcp) correctly for your application. Ensure you're authenticated with Port and have the necessary permissions. If you've followed all the steps and still have issues, please reach out to our support team.
+
+
+
+
+Why do I see an error about too many tools? (Click to expand)
+
+Each self-service action in your Port instance becomes an individual tool (prefixed with `run_`). If your organization has many actions, this can result in a large number of available tools.
+
+While most AI models handle this well, some have restrictions and may limit you to around 40 tools total. If you encounter errors about tool limits:
+
+1. **Reduce the number of tools** by customizing which tools are enabled (see [Select which tools to use](available-tools#select-which-tools-to-use) section above)
+2. **Focus on essential tools** by only enabling the read-only tools you need plus a few key actions
+3. **Contact your Port Admin** to review which actions are essential for your workflow
+
+This is completely normal behavior and doesn't indicate a problem with Port MCP; it's just a limitation of some AI models.
+
+
+
+:::tip Getting Help
+If you continue to experience issues, please reach out to Port support with:
+- Your IDE/application version
+- The specific error messages you're seeing
+- Your Port region (EU/US)
+- Steps you've already tried
+
+This information will help us provide more targeted assistance.
+:::
diff --git a/docs/guides/all/add-rca-context-to-ai-agents.md b/docs/guides/all/add-rca-context-to-ai-agents.md
index 1bf4af8ac8..151e26ad54 100644
--- a/docs/guides/all/add-rca-context-to-ai-agents.md
+++ b/docs/guides/all/add-rca-context-to-ai-agents.md
@@ -24,7 +24,7 @@ For this guide, we will leverage on the [Incident Manager AI agent](/guides/all/
## Prerequisites
This guide assumes you have:
-- A Port account with the [AI agents feature enabled](/ai-agents/overview#access-to-the-feature).
+- A Port account with the [AI agents feature enabled](/ai-interfaces/ai-agents/overview#access-to-the-feature).
- An existing [Incident Manager AI agent](/guides/all/setup-incident-manager-ai-agent) (or similar).
## Set up data model
diff --git a/docs/guides/all/enrich-tasks-with-ai.md b/docs/guides/all/enrich-tasks-with-ai.md
index 06306ac446..a75bc73b35 100644
--- a/docs/guides/all/enrich-tasks-with-ai.md
+++ b/docs/guides/all/enrich-tasks-with-ai.md
@@ -26,14 +26,14 @@ By the end of this guide, your developers will receive automated, contextual ins
## Prerequisites
This guide assumes you have:
-- A Port account with the [AI agents feature enabled](/ai-agents/overview#access-to-the-feature).
+- A Port account with the [AI agents feature enabled](/ai-interfaces/ai-agents/overview#access-to-the-feature).
- Appropriate permissions to create and configure AI agents.
- [Jira integration](/build-your-software-catalog/sync-data-to-catalog/project-management/jira/) configured in your Port instance.
- [GitHub integration](/build-your-software-catalog/sync-data-to-catalog/git/github/) configured in your Port instance.
## Set up data model
-To create an Task Assistant AI agent in Port, we'll need to configure the following components as described in our [Build an AI agent](/ai-agents/build-an-ai-agent) guide:
+To create a Task Assistant AI agent in Port, we'll need to configure the following components as described in our [Build an AI agent](/ai-interfaces/ai-agents/build-an-ai-agent) guide:
- The data sources it will use to answer questions about tasks and their related issues and collaborators.
- The agent configuration that defines its capabilities and conversation starters.
- An automation to analyze the task and trigger the agent.
diff --git a/docs/guides/all/generate-incident-updates-with-ai.md b/docs/guides/all/generate-incident-updates-with-ai.md
index 6c6118d154..48b29929fc 100644
--- a/docs/guides/all/generate-incident-updates-with-ai.md
+++ b/docs/guides/all/generate-incident-updates-with-ai.md
@@ -23,7 +23,7 @@ This guide assumes the following:
- You have a Port account and have completed the [onboarding process](https://docs.port.io/getting-started/overview).
- [PagerDuty integration](https://docs.port.io/build-your-software-catalog/sync-data-to-catalog/incident-management/pagerduty/) is installed in your account.
- [GitHub integration](https://docs.port.io/build-your-software-catalog/sync-data-to-catalog/git/github/) is installed in your account.
-- You have access to [create and configure AI agents](https://docs.port.io/ai-agents/overview#getting-started-with-ai-agents) in Port.
+- You have access to [create and configure AI agents](https://docs.port.io/ai-interfaces/ai-agents/overview#getting-started-with-ai-agents) in Port.
:::tip Alternative integrations
While this guide uses PagerDuty and GitHub, you can adapt it for other incident management tools like OpsGenie or FireHydrant, and other Git providers like GitLab or Azure DevOps.
@@ -394,7 +394,7 @@ We will create two automations to orchestrate the AI-enhanced incident managemen
4. Click `Create` to save the automation.
:::caution Slack token setup
-You will need to add your Slack bot token as a secret in Port. Head to our [guide on how to install the Slack app](https://docs.port.io/ai-agents/slack-app#installation).
+You will need to add your Slack bot token as a secret in Port. Head to our [guide on how to install the Slack app](https://docs.port.io/ai-interfaces/ai-agents/slack-app#installation).
:::
Below is an example notification sent to Slack:
diff --git a/docs/guides/all/generate-zendesk-ticket-summaries-with-ai.md b/docs/guides/all/generate-zendesk-ticket-summaries-with-ai.md
index 3a490faf27..9da380ab49 100644
--- a/docs/guides/all/generate-zendesk-ticket-summaries-with-ai.md
+++ b/docs/guides/all/generate-zendesk-ticket-summaries-with-ai.md
@@ -42,7 +42,7 @@ Example output from the prompt:
## Prerequisites
-- Port remote MCP installed and connected in your IDE (Cursor, Claude, etc.). Follow the setup guide: [Port MCP Server - Setup](/ai-agents/port-mcp-server#setup)
+- Port remote MCP installed and connected in your IDE (Cursor, Claude, etc.). Follow the setup guide: [Port MCP Server - Setup](/ai-interfaces/port-mcp-server/overview-and-installation#installing-port-mcp)
- A Port account and have completed the [onboarding process](https://docs.port.io/getting-started/overview).
- Custom integration to ingest Zendesk tickets using [Port webhooks](/build-your-software-catalog/custom-integration/webhook).
@@ -226,7 +226,7 @@ Replace `` with your Zendesk subdomain, for example: `https://ac
## Create a reusable prompt
-We will now define a prompt entity that your IDE can invoke via [Port MCP](/ai-agents/port-mcp-server#prompts). Once created, you can run it with the ticket ID, and it will gather context and produce a structured summary.
+We will now define a prompt entity that your IDE can invoke via [Port MCP](/ai-interfaces/port-mcp-server/prompts). Once created, you can run it with the ticket ID, and it will gather context and produce a structured summary.
1. Go to the [Prompts page](https://app.getport.io/prompts) in Port.
2. Click `Create prompt`.
@@ -265,7 +265,7 @@ We will now define a prompt entity that your IDE can invoke via [Port MCP](/ai-a
### Test the workflow
1. In Port, make sure there is a `zendesk_ticket` entity whose `identifier` matches a real Zendesk ticket ID you want to summarize.
-2. In your IDE assistant, choose **Port MCP** as the provider as described [here](/ai-agents/port-mcp-server#setup).
+2. In your IDE assistant, choose **Port MCP** as the provider as described [here](/ai-interfaces/port-mcp-server/overview-and-installation#installing-port-mcp).
3. Run the `zendesk_ticket_summary` prompt with `ticket_id` set to that Zendesk ticket ID.
4. The assistant will automatically execute the self-service actions to fetch comments and side conversations, then return a structured summary.
@@ -276,7 +276,7 @@ Summaries can include sensitive customer or internal details, so treat them as i
:::
:::info Using Port MCP prompts
-For setup and capabilities, see the Port MCP Server prompts documentation: [Port MCP Server - Prompts](/ai-agents/port-mcp-server#prompts)
+For setup and capabilities, see the Port MCP Server prompts documentation: [Port MCP Server - Prompts](/ai-interfaces/port-mcp-server/prompts)
:::
## Best practices
diff --git a/docs/guides/all/setup-incident-manager-ai-agent.md b/docs/guides/all/setup-incident-manager-ai-agent.md
index a33bb64fa3..db15effd7d 100644
--- a/docs/guides/all/setup-incident-manager-ai-agent.md
+++ b/docs/guides/all/setup-incident-manager-ai-agent.md
@@ -25,12 +25,12 @@ By the end of this guide, your developers will be able to get information about
## Prerequisites
This guide assumes you have:
-- A Port account with the [AI agents feature enabled](/ai-agents/overview#access-to-the-feature).
+- A Port account with the [AI agents feature enabled](/ai-interfaces/ai-agents/overview#access-to-the-feature).
- Appropriate permissions to create and configure AI agents.
## Set up data model
-To create an Incident Manager AI agent in Port, we'll need to configure two main components as described in our [Build an AI agent](/ai-agents/build-an-ai-agent) guide:
+To create an Incident Manager AI agent in Port, we'll need to configure two main components as described in our [Build an AI agent](/ai-interfaces/ai-agents/build-an-ai-agent) guide:
- The data sources it will use to answer questions about incidents and on-call rotations.
- The agent configuration that defines its capabilities and conversation starters.
@@ -93,7 +93,7 @@ For example, Opsgenie or Firehydrant.
## Interact with the Incident Manager
-You can interact with the Incident Manager AI agent in [several ways](/ai-agents/interact-with-ai-agents).
+You can interact with the Incident Manager AI agent in [several ways](/ai-interfaces/ai-agents/interact-with-ai-agents).
This guide will demonstrate the two main ways.
@@ -135,7 +135,7 @@ Once the widget is set up, you can:
-The Slack integration provides a natural way to interact with the Incident Manager agent. Before using this method, ensure you have installed and configured the **[Port AI Assistant Slack App](/ai-agents/slack-app)**.
+The Slack integration provides a natural way to interact with the Incident Manager agent. Before using this method, ensure you have installed and configured the **[Port AI Assistant Slack App](/ai-interfaces/ai-agents/slack-app)**.
You can interact with the Incident Manager agent in two ways:
1. **Direct message** the Port AI Assistant.
@@ -180,12 +180,12 @@ To get the most out of your Incident Manager agent:
1. **Try it out**: Start with simple queries and see how the agent responds.
2. **Add context**: If the response isn't what you expected, try asking again with more details.
-3. **Troubleshoot**: If you're still not getting the right answers, check our [troubleshooting guide](/ai-agents/interact-with-ai-agents#troubleshooting--faq) for common issues and solutions.
+3. **Troubleshoot**: If you're still not getting the right answers, check our [troubleshooting guide](/ai-interfaces/ai-agents/interact-with-ai-agents#troubleshooting--faq) for common issues and solutions.
## Possible enhancements
You can further enhance the Incident Manager setup by:
-- **Integration expansion**: [Add more data sources](/ai-agents/build-an-ai-agent#step-2-configure-data-access) like Opsgenie or ServiceNow.
-- **Automated notifications**: [Configure the agent](/ai-agents/interact-with-ai-agents#actions-and-automations) to proactively notify about incident updates or escalations.
-- **Custom conversation starters**: Add organization-specific queries to the [conversation starters](/ai-agents/build-an-ai-agent#step-5-add-conversation-starters).
-- **Monitor and improve**: [Check how your developers are interacting](/ai-agents/interact-with-ai-agents#ai-interaction-details) with the agent and improve it according to feedback.
+- **Integration expansion**: [Add more data sources](/ai-interfaces/ai-agents/build-an-ai-agent#step-2-configure-data-access) like Opsgenie or ServiceNow.
+- **Automated notifications**: [Configure the agent](/ai-interfaces/ai-agents/interact-with-ai-agents#actions-and-automations) to proactively notify about incident updates or escalations.
+- **Custom conversation starters**: Add organization-specific queries to the [conversation starters](/ai-interfaces/ai-agents/build-an-ai-agent#step-5-add-conversation-starters).
+- **Monitor and improve**: [Check how your developers are interacting](/ai-interfaces/ai-agents/interact-with-ai-agents#ai-interaction-details) with the agent and improve it according to feedback.
diff --git a/docs/guides/all/setup-platform-request-triage-ai-agent.md b/docs/guides/all/setup-platform-request-triage-ai-agent.md
index e091cee002..fbbbd109b2 100644
--- a/docs/guides/all/setup-platform-request-triage-ai-agent.md
+++ b/docs/guides/all/setup-platform-request-triage-ai-agent.md
@@ -24,8 +24,8 @@ This guide will walk you through setting up a "Platform Request Triage" AI agent
## Prerequisites
This guide assumes you have:
-- A Port account with the [AI agents feature enabled](/ai-agents/overview#access-to-the-feature).
-- The [Port Slack App](/ai-agents/slack-app) installed and configured.
+- A Port account with the [AI agents feature enabled](/ai-interfaces/ai-agents/overview#access-to-the-feature).
+- The [Port Slack App](/ai-interfaces/ai-agents/slack-app) installed and configured.
## Setup
@@ -253,7 +253,7 @@ This action will be used by the AI agent to create new platform requests.
The credential name follows the pattern `__SLACK_APP_BOT_TOKEN_Txxxxxxxxxx`.
- **Channel ID**: Replace `YOUR_CHANNEL_ID` with the ID of the Slack channel where you want to send notifications. You can also use a JQ expression to dynamically select the channel.
- For more details, refer to the [Port Slack App](/ai-agents/slack-app) documentation.
+ For more details, refer to the [Port Slack App](/ai-interfaces/ai-agents/slack-app) documentation.
:::
5. Click `Create`.
diff --git a/docs/guides/all/setup-pr-enricher-ai-agent.md b/docs/guides/all/setup-pr-enricher-ai-agent.md
index 28983175fe..57fdfa2960 100644
--- a/docs/guides/all/setup-pr-enricher-ai-agent.md
+++ b/docs/guides/all/setup-pr-enricher-ai-agent.md
@@ -26,7 +26,7 @@ By the end of this guide, your developers will receive automated, contextual com
## Prerequisites
This guide assumes you have:
-- A Port account with the [AI agents feature enabled](/ai-agents/overview#access-to-the-feature).
+- A Port account with the [AI agents feature enabled](/ai-interfaces/ai-agents/overview#access-to-the-feature).
- Appropriate permissions to create and configure AI agents.
- [GitHub integration](/build-your-software-catalog/sync-data-to-catalog/git/github/) configured in your Port instance.
- [Jira integration](/build-your-software-catalog/sync-data-to-catalog/project-management/jira/) configured in your Port instance.
@@ -443,13 +443,13 @@ To get the most out of your PR Enricher agent:
5. **Test the workflow**: Create test pull requests to verify the entire flow works as expected.
-6. **Troubleshoot**: If you're not getting the expected results, check our [troubleshooting guide](/ai-agents/interact-with-ai-agents#troubleshooting--faq) for common issues and solutions.
+6. **Troubleshoot**: If you're not getting the expected results, check our [troubleshooting guide](/ai-interfaces/ai-agents/interact-with-ai-agents#troubleshooting--faq) for common issues and solutions.
## Possible enhancements
You can further enhance the PR Enricher setup by:
-- **Adding more data sources** like PagerDuty for incident context or [additional Git providers](/ai-agents/build-an-ai-agent#step-2-configure-data-access) for broader repository visibility.
-- **Configuring automated actions** such as [reviewer assignment, PR labeling, or creating follow-up Jira tickets](/ai-agents/interact-with-ai-agents#actions-and-automations).
-- **Customizing risk assessment criteria** to align with your organization's specific guidelines and [monitoring usage patterns](/ai-agents/interact-with-ai-agents#ai-interaction-details).
+- **Adding more data sources** like PagerDuty for incident context or [additional Git providers](/ai-interfaces/ai-agents/build-an-ai-agent#step-2-configure-data-access) for broader repository visibility.
+- **Configuring automated actions** such as [reviewer assignment, PR labeling, or creating follow-up Jira tickets](/ai-interfaces/ai-agents/interact-with-ai-agents#actions-and-automations).
+- **Customizing risk assessment criteria** to align with your organization's specific guidelines and [monitoring usage patterns](/ai-interfaces/ai-agents/interact-with-ai-agents#ai-interaction-details).
diff --git a/docs/guides/all/setup-service-explorer-ai-agent.md b/docs/guides/all/setup-service-explorer-ai-agent.md
index 0a8e2afd4c..b5da8a646b 100644
--- a/docs/guides/all/setup-service-explorer-ai-agent.md
+++ b/docs/guides/all/setup-service-explorer-ai-agent.md
@@ -25,7 +25,7 @@ This guide will walk you through setting up a "Service Explorer" AI agent in Por
## Prerequisites
This guide assumes you have:
-- A Port account with the [AI agents feature enabled](/ai-agents/overview#access-to-the-feature).
+- A Port account with the [AI agents feature enabled](/ai-interfaces/ai-agents/overview#access-to-the-feature).
- Appropriate permissions to create and configure AI agents.
- [GitHub integration](/build-your-software-catalog/sync-data-to-catalog/git/github/) installed.
- **Optional but recommended integrations for richer context**:
@@ -35,7 +35,7 @@ This guide assumes you have:
## Set up
-To create a Service Explorer AI agent in Port, we'll need to configure two main components as described in our [Build an AI agent](/ai-agents/build-an-ai-agent) guide:
+To create a Service Explorer AI agent in Port, we'll need to configure two main components as described in our [Build an AI agent](/ai-interfaces/ai-agents/build-an-ai-agent) guide:
- The data sources it will use to answer questions.
- The agent configuration that defines its capabilities and conversation starters.
@@ -115,7 +115,7 @@ Keep in mind that there is a trade-off between context richness and response tim
## Interact with the Service Explorer
-You can interact with the Service Explorer AI agent in [several ways](/ai-agents/interact-with-ai-agents). This guide will demonstrate the two main ways.
+You can interact with the Service Explorer AI agent in [several ways](/ai-interfaces/ai-agents/interact-with-ai-agents). This guide will demonstrate the two main ways.
@@ -136,7 +136,7 @@ Once the widget is set up, you can ask questions directly in the chat field.
-The Slack integration provides a natural way to interact with the Service Explorer agent. Before using this method, ensure you have installed and configured the **[Port AI Assistant Slack App](/ai-agents/slack-app)**.
+The Slack integration provides a natural way to interact with the Service Explorer agent. Before using this method, ensure you have installed and configured the **[Port AI Assistant Slack App](/ai-interfaces/ai-agents/slack-app)**.
You can interact with the Service Explorer agent in two ways:
1. **Direct message** the Port AI Assistant.
diff --git a/docs/guides/all/setup-task-manager-ai-agent.md b/docs/guides/all/setup-task-manager-ai-agent.md
index 1a9830357a..8cd83a475c 100644
--- a/docs/guides/all/setup-task-manager-ai-agent.md
+++ b/docs/guides/all/setup-task-manager-ai-agent.md
@@ -24,12 +24,12 @@ By the end of this guide, your developers will be able to get information about
## Prerequisites
This guide assumes you have:
-- A Port account with the [AI agents feature enabled](/ai-agents/overview#access-to-the-feature).
+- A Port account with the [AI agents feature enabled](/ai-interfaces/ai-agents/overview#access-to-the-feature).
- Appropriate permissions to create and configure AI agents.
## Set up data model
-To create a Task Manager AI agent in Port, we'll need to configure two main components as described in our [Build an AI agent](/ai-agents/build-an-ai-agent) guide:
+To create a Task Manager AI agent in Port, we'll need to configure two main components as described in our [Build an AI agent](/ai-interfaces/ai-agents/build-an-ai-agent) guide:
- The data sources it will use to answer questions about tasks and pull requests.
- The agent configuration that defines its capabilities and conversation starters.
@@ -99,7 +99,7 @@ For example:
## Interact with the Task Manager
-You can interact with the task manager AI agent in [several ways](/ai-agents/interact-with-ai-agents).
+You can interact with the task manager AI agent in [several ways](/ai-interfaces/ai-agents/interact-with-ai-agents).
This guide will demonstrate the two main ways.
@@ -140,7 +140,7 @@ Once the widget is set up, you can:
-The Slack integration provides a natural way to interact with the Task Manager agent. Before using this method, ensure you have installed and configured the **[Port AI Assistant Slack App](/ai-agents/slack-app)**
+The Slack integration provides a natural way to interact with the Task Manager agent. Before using this method, ensure you have installed and configured the **[Port AI Assistant Slack App](/ai-interfaces/ai-agents/slack-app)**
You can interact with the Task Manager agent in two ways:
1. **Direct message** the Port AI Assistant.
@@ -185,12 +185,12 @@ To get the most out of your Task Manager agent:
1. **Try it out**: Start with simple queries and see how the agent responds.
2. **Add context**: If the response isn't what you expected, try asking again with more details.
-3. **Troubleshoot**: If you're still not getting the right answers, check our [troubleshooting guide](/ai-agents/interact-with-ai-agents#troubleshooting--faq) for common issues and solutions.
+3. **Troubleshoot**: If you're still not getting the right answers, check our [troubleshooting guide](/ai-interfaces/ai-agents/interact-with-ai-agents#troubleshooting--faq) for common issues and solutions.
## Possible enhancements
You can further enhance the Task Manager setup by:
-- **Integration expansion**: [Add more data sources](/ai-agents/build-an-ai-agent#step-2-configure-data-access) like GitLab or Azure DevOps.
-- **Automated notifications**: [Configure the agent](/ai-agents/interact-with-ai-agents#actions-and-automations) to proactively notify about important updates.
-- **Custom conversation starters**: Add organization-specific queries to the [conversation starters](/ai-agents/build-an-ai-agent#step-5-add-conversation-starters).
-- **Monitor and improve**: [Check how your developers are interacting](/ai-agents/interact-with-ai-agents#ai-interaction-details) with the agent and improve it according to feedback.
+- **Integration expansion**: [Add more data sources](/ai-interfaces/ai-agents/build-an-ai-agent#step-2-configure-data-access) like GitLab or Azure DevOps.
+- **Automated notifications**: [Configure the agent](/ai-interfaces/ai-agents/interact-with-ai-agents#actions-and-automations) to proactively notify about important updates.
+- **Custom conversation starters**: Add organization-specific queries to the [conversation starters](/ai-interfaces/ai-agents/build-an-ai-agent#step-5-add-conversation-starters).
+- **Monitor and improve**: [Check how your developers are interacting](/ai-interfaces/ai-agents/interact-with-ai-agents#ai-interaction-details) with the agent and improve it according to feedback.
diff --git a/docs/guides/all/trigger-github-copilot-from-port.md b/docs/guides/all/trigger-github-copilot-from-port.md
index 3de8b6e894..8037be1c26 100644
--- a/docs/guides/all/trigger-github-copilot-from-port.md
+++ b/docs/guides/all/trigger-github-copilot-from-port.md
@@ -23,7 +23,7 @@ By leveraging AI coding agents like Copilot, you can significantly reduce manual
This guide assumes the following:
- You have a Port account and have completed the [onboarding process](https://docs.port.io/getting-started/overview).
- [Port's GitHub app](https://docs.port.io/build-your-software-catalog/sync-data-to-catalog/git/github/) is installed in your account.
-- You have access to [create and configure AI agents](https://docs.port.io/ai-agents/overview#getting-started-with-ai-agents) in Port.
+- You have access to [create and configure AI agents](https://docs.port.io/ai-interfaces/ai-agents/overview#getting-started-with-ai-agents) in Port.
- GitHub Copilot is enabled in your repository.
diff --git a/static/img/ai-agents/PortPromptForm.png b/static/img/ai-agents/PortPromptForm.png
index 460910fee5..4acb99c877 100644
Binary files a/static/img/ai-agents/PortPromptForm.png and b/static/img/ai-agents/PortPromptForm.png differ