diff --git a/AGENTS.md b/AGENTS.md
index dcd15948..341df011 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -456,6 +456,11 @@ alembic upgrade head --sql
 - Add component: Use/extend `src/components/ui/`
 - Add type: Define in `src/types/`

+**Key feature modules:**
+- `src/features/settings/` - Settings page components including Models, Teams, Bots
+- `src/features/tasks/` - Task management and chat interface
+- `src/apis/models.ts` - Model API client (unified models, test connection)
+
 **Environment:** `NEXT_PUBLIC_API_URL` for client-side API calls

 ### Executor
@@ -476,6 +481,52 @@ alembic upgrade head --sql

 ---

+## 🔧 Model Management
+
+### Model Types
+
+Wegent supports two types of AI models:
+
+| Type | Description | Storage |
+|------|-------------|---------|
+| **Public** | System-provided models, shared across all users | `public_models` table |
+| **User** | User-defined private models | `kinds` table (kind='Model') |
+
+### Model Resolution Order
+
+When a Bot executes a task, models are resolved in this order:
+1. Task-level model override (if `force_override_bot_model` is true)
+2. Bot's `bind_model` from `agent_config`
+3. Bot's `modelRef` (legacy)
+4. Default model
+
+### Key APIs
+
+- `GET /api/models/unified` - List all available models (public + user)
+- `GET /api/models/unified/{name}` - Get specific model by name
+- `POST /api/models/test-connection` - Test model API connection
+- `GET /api/models/compatible?agent_name=X` - Get models compatible with agent type
+
+### Bot Model Binding
+
+Two ways to bind models to Bots:
+
+```yaml
+# Method 1: Using modelRef (legacy)
+spec:
+  modelRef:
+    name: model-name
+    namespace: default
+
+# Method 2: Using bind_model (recommended)
+spec:
+  agent_config:
+    bind_model: "my-model"
+    bind_model_type: "user"  # Optional: 'public' or 'user'
+```
+
+---
+
 ## 🔒 Security

 - Never commit credentials - use `.env` files (excluded from git)
@@ -549,6 +600,6 @@ docker-compose up -d --build [service]

 ---

-**Last Updated**: 2025-01
-**Wegent Version**: 1.0.7
+**Last Updated**: 2025-07
+**Wegent Version**: 1.0.8
 **Maintained by**: WeCode-AI Team
diff --git a/docs/en/guides/user/configuring-models.md b/docs/en/guides/user/configuring-models.md
index 443375da..1590edfa 100644
--- a/docs/en/guides/user/configuring-models.md
+++ b/docs/en/guides/user/configuring-models.md
@@ -45,8 +45,21 @@ Model determines "how strong the thinking ability is"
 ### Relationship with Database

 Model resources are stored in the following database tables:
-- `public_models`: Stores Model configuration information
-- `kinds`: Defines the resource type as `Model`
+- `public_models`: Stores public Model configurations shared across all users
+- `kinds`: Stores user-defined Model configurations (type='user')
+
+### Model Types
+
+Wegent supports two types of models:
+
+| Type | Description | Storage |
+|------|-------------|---------|
+| **Public** | System-provided models shared across all users | `public_models` table |
+| **User** | User-defined private models | `kinds` table |
+
+When binding models to Bots, the system resolves models in this order:
+1. User's private models (type='user')
+2. Public models (type='public')
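+
+For example, if a user-defined model and a public model share a name, a plain `bind_model` reference resolves to the user's copy first; setting `bind_model_type` pins the public one instead. A minimal sketch (the model name here is purely illustrative):
+
+```yaml
+spec:
+  agent_config:
+    # Without bind_model_type, "gpt-4o" would resolve to the user's private model first
+    bind_model: "gpt-4o"
+    # Pin the shared public model explicitly to avoid the name collision
+    bind_model_type: "public"
+```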

 ---

@@ -432,45 +445,52 @@ When using OpenAI GPT models, you need to configure the following environment va

 ### Method 1: Configure via Web Interface (Recommended for Beginners)

-#### Step 1: Go to Model Configuration Page
+#### Step 1: Go to Model Management Page

 1. Log in to Wegent Web interface (http://localhost:3000)
-2. Go to **Resource Management** → **Model Configuration**
-3. Click **Create New Model** button
+2. Go to **Settings** → **Models** tab
+3. You will see a unified model list displaying both public and user-defined models
+4. Click **Create New Model** button to add a new model

-
+

-#### Step 2: Use Preset Template (Recommended)
+#### Step 2: Configure Model Details

-Above the JSON configuration input box, you will see a "Quick Configuration" area:
+In the model creation/edit dialog, configure the following:

-📋 **Use Preset Templates for Quick Configuration**
+**Basic Information**:
+- **Name**: Give the Model a descriptive name (e.g., `my-claude-sonnet`)
+- **Display Name**: Optional human-readable name shown in the UI

-- Click **[Claude Sonnet 4 Template]** button (primary recommendation)
-- Or click **[OpenAI GPT-4 Template]** button (alternative)
+**Provider Configuration**:
+- **Provider Type**: Select `OpenAI` or `Anthropic`
+- **Model ID**: Choose from preset models or enter a custom model ID
+  - OpenAI presets: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, `gpt-4o`, `gpt-4o-mini`
+  - Anthropic presets: `claude-sonnet-4-20250514`, `claude-3-7-sonnet-20250219`, `claude-3-5-haiku-20241022`

-Clicking will automatically fill the complete JSON configuration into the input box.
+**Authentication**:
+- **API Key**: Enter your API key from the provider
+  - Use the visibility toggle (👁️) to show/hide the key
+- **Base URL**: Optional custom API endpoint (for proxies or self-hosted services)

-#### Step 3: Modify API Key
+#### Step 3: Test Connection

-⚠️ **Important**: Please replace the API Key in the configuration with your actual key
+Before saving, use the **Test Connection** feature to verify your configuration:

-The API Key in the template is a placeholder, you need to:
-1. Find the `ANTHROPIC_AUTH_TOKEN` or `OPENAI_API_KEY` field in the configuration
-2. Replace the value with your real API Key obtained from the official site
-3. If it's an Anthropic model, it's recommended to also modify `ANTHROPIC_API_KEY`
+1. Click the **Test Connection** button
+2. The system will send a minimal test request to verify:
+   - API Key validity
+   - Model availability
+   - Network connectivity
+3. Results:
+   - ✅ "Successfully connected to {model}" - Configuration is valid
+   - ❌ Error message - Check your API key or network settings

-#### Step 4: Fill in Other Fields
+#### Step 4: Save Configuration

-- **Name**: Give the Model a descriptive name (e.g., `claude-sonnet-4-prod`)
-- **Namespace**: Usually use `default`
-- **JSON Configuration**: Already filled via template, just need to modify API Key
+Click **Save** to create or update the Model.

-#### Step 5: Submit Configuration
-
-Click the **Submit** button to create the Model.
-
-The system will validate the configuration format and will prompt if there are errors.
+The model will appear in your model list and can be used in Bot configurations.

 ---

@@ -490,6 +510,32 @@ Import the YAML configuration via the Web interface or API.

 ---

+## 🔄 Model Selection in Tasks
+
+### Per-Task Model Override
+
+When creating or sending a task, you can override the Bot's default model:
+
+1. In the chat interface, look for the **Model Selector** dropdown
+2. Select a different model from your available models
+3. Optionally enable **Force Override** to ensure this model is used even if the Bot has a configured model
+
+**Use Cases**:
+- Testing with different models without modifying Bot configuration
+- Using a more powerful model for complex tasks
+- Using a faster/cheaper model for simple tasks
+
+### Model Resolution Priority
+
+When a task runs, the model is resolved in this order:
+
+1. **Task-level override** (if `force_override_bot_model` is true)
+2. **Bot's bind_model** (from agent_config)
+3. **Bot's modelRef** (legacy)
+4. **Default model**
+
+---
+
 ## ✅ Configuration Validation

 After configuring a Model, **validation is essential** to ensure the configuration is correct and avoid errors during subsequent use.
diff --git a/docs/en/guides/user/creating-bots.md b/docs/en/guides/user/creating-bots.md
index 258b9414..fc774b9e 100644
--- a/docs/en/guides/user/creating-bots.md
+++ b/docs/en/guides/user/creating-bots.md
@@ -158,7 +158,32 @@ status:
 |-------|------|----------|-------------|
 | `ghostRef` | object | Yes | Ghost resource reference |
 | `shellRef` | object | Yes | Shell resource reference |
-| `modelRef` | object | Yes | Model resource reference |
+| `modelRef` | object | No | Model resource reference (optional; `bind_model` can be used instead) |
+
+#### Model Binding Methods
+
+There are two ways to bind a model to a Bot:
+
+**Method 1: Using modelRef (Legacy)**
+```yaml
+spec:
+  modelRef:
+    name: <model-name>
+    namespace: default
+```
+
+**Method 2: Using bind_model in agent_config (Recommended)**
+```yaml
+spec:
+  agent_config:
+    bind_model: "my-custom-model"
+    bind_model_type: "user"  # Optional: 'public' or 'user'
+```
+
+The `bind_model` approach offers more flexibility:
+- Reference models by name without full YAML structure
+- Optionally specify model type to avoid naming conflicts
+- System auto-detects model type if not specified (user models first, then public)
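+
+As a sketch, a complete Bot `spec` using `bind_model` might look like this (the ghost, shell, and model names are illustrative, and `modelRef` is omitted entirely):
+
+```yaml
+spec:
+  ghostRef:
+    name: my-ghost
+    namespace: default
+  shellRef:
+    name: my-shell
+    namespace: default
+  agent_config:
+    bind_model: "my-custom-model"
+    # Explicitly mark the binding as a user model to avoid clashing with a public model of the same name
+    bind_model_type: "user"
+```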

 #### Reference Object Format
diff --git a/docs/en/guides/user/managing-tasks.md b/docs/en/guides/user/managing-tasks.md
index c3b33047..0c54423b 100644
--- a/docs/en/guides/user/managing-tasks.md
+++ b/docs/en/guides/user/managing-tasks.md
@@ -187,6 +187,20 @@ status:
 | `prompt` | string | Yes | Detailed task description and requirements |
 | `teamRef` | object | Yes | Team reference executing the task |
 | `workspaceRef` | object | Yes | Workspace reference |
+| `model_id` | string | No | Model name to override the Bot's default model |
+| `force_override_bot_model` | boolean | No | Force use of the specified model even if the Bot has a configured model |
+
+### Per-Task Model Selection
+
+When creating a task through the Web interface, you can select a different model:
+
+1. **Model Selector**: In the chat input area, use the model dropdown to select from available models
+2. **Force Override**: Enable this option to ensure your selected model is used regardless of Bot configuration
+
+**Use cases**:
+- Test different models without modifying Bot configuration
+- Use a more powerful model for complex one-off tasks
+- Use a cheaper/faster model for simple queries
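+
+As a sketch, a Task spec that pins a model for a single run might look like this (the team, workspace, and model names are illustrative; the override fields follow the spec table above):
+
+```yaml
+spec:
+  prompt: "Refactor the payment module and add unit tests"
+  teamRef:
+    name: my-team
+    namespace: default
+  workspaceRef:
+    name: my-workspace
+    namespace: default
+  # Per-task override: use this model instead of the Bot's configured one
+  model_id: "claude-sonnet-4-20250514"
+  force_override_bot_model: true
+```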

 #### status Section
diff --git a/docs/zh/guides/user/configuring-models.md b/docs/zh/guides/user/configuring-models.md
index 55939e69..9bb1ad08 100644
--- a/docs/zh/guides/user/configuring-models.md
+++ b/docs/zh/guides/user/configuring-models.md
@@ -45,8 +45,21 @@ Model 决定"思考能力有多强"
 ### 与数据库的关系

 Model 资源存储在数据库的以下表中:
-- `public_models`: 存储 Model 配置信息
-- `kinds`: 定义资源类型为 `Model`
+- `public_models`: 存储所有用户共享的公共 Model 配置
+- `kinds`: 存储用户自定义的私有 Model 配置 (type='user')
+
+### 模型类型
+
+Wegent 支持两种类型的模型:
+
+| 类型 | 说明 | 存储位置 |
+|------|------|----------|
+| **公共模型** | 系统提供的所有用户共享的模型 | `public_models` 表 |
+| **用户模型** | 用户自定义的私有模型 | `kinds` 表 |
+
+当将模型绑定到 Bot 时,系统按以下顺序解析模型:
+1. 用户的私有模型 (type='user')
+2. 公共模型 (type='public')

 ---

@@ -432,45 +445,52 @@ sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

 ### 方式 1: 通过 Web 界面配置 (推荐新手)

-#### 步骤 1: 进入 Model 配置页面
+#### 步骤 1: 进入 Model 管理页面

 1. 登录 Wegent Web 界面 (http://localhost:3000)
-2. 进入 **资源管理** → **Model 配置**
-3. 点击 **创建新 Model** 按钮
+2. 进入 **设置** → **Models** 标签页
+3. 您将看到统一的模型列表,显示公共模型和用户自定义模型
+4. 点击 **创建新模型** 按钮添加新模型

-
+

-#### 步骤 2: 使用预设模板 (推荐)
+#### 步骤 2: 配置模型详情

-在 JSON 配置输入框上方,您会看到 "快速配置" 区域:
+在模型创建/编辑对话框中,配置以下内容:

-📋 **使用预设模板快速配置**
+**基本信息**:
+- **名称**: 给 Model 起一个描述性的名称 (如 `my-claude-sonnet`)
+- **显示名称**: 可选的在界面上显示的人类可读名称

-- 点击 **[Claude Sonnet 4 模板]** 按钮 (主要推荐)
-- 或点击 **[OpenAI GPT-4 模板]** 按钮 (备选)
+**提供商配置**:
+- **提供商类型**: 选择 `OpenAI` 或 `Anthropic`
+- **模型 ID**: 从预设模型中选择或输入自定义模型 ID
+  - OpenAI 预设: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, `gpt-4o`, `gpt-4o-mini`
+  - Anthropic 预设: `claude-sonnet-4-20250514`, `claude-3-7-sonnet-20250219`, `claude-3-5-haiku-20241022`

-点击后会自动填充完整的 JSON 配置到输入框。
+**认证信息**:
+- **API Key**: 输入您从提供商获取的 API 密钥
+  - 使用可见性切换按钮 (👁️) 显示/隐藏密钥
+- **Base URL**: 可选的自定义 API 端点 (用于代理或自托管服务)

-#### 步骤 3: 修改 API Key
+#### 步骤 3: 测试连接

-⚠️ **重要**: 请修改配置中的 API Key 为您的实际密钥
+在保存之前,使用 **测试连接** 功能验证您的配置:

-模板中的 API Key 是占位符,您需要:
-1. 找到配置中的 `ANTHROPIC_AUTH_TOKEN` 或 `OPENAI_API_KEY` 字段
-2. 将值替换为您从官网获取的真实 API Key
-3. 如果是 Anthropic 模型,建议同时修改 `ANTHROPIC_API_KEY`
+1. 点击 **测试连接** 按钮
+2. 系统会发送一个最小化的测试请求来验证:
+   - API Key 有效性
+   - 模型可用性
+   - 网络连通性
+3. 结果:
+   - ✅ "成功连接到 {模型}" - 配置有效
+   - ❌ 错误信息 - 检查您的 API 密钥或网络设置

-#### 步骤 4: 填写其他字段
+#### 步骤 4: 保存配置

-- **名称**: 给 Model 起一个描述性的名称 (如 `claude-sonnet-4-prod`)
-- **命名空间**: 通常使用 `default`
-- **JSON 配置**: 已通过模板填充,只需修改 API Key
+点击 **保存** 创建或更新模型。

-#### 步骤 5: 提交配置
-
-点击 **提交** 按钮创建 Model。
-
-系统会验证配置格式,如果有错误会提示。
+模型将出现在您的模型列表中,可以在 Bot 配置中使用。

 ---

@@ -490,6 +510,32 @@ sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

 ---

+## 🔄 任务中的模型选择
+
+### 单任务模型覆盖
+
+创建或发送任务时,您可以覆盖 Bot 的默认模型:
+
+1. 在聊天界面中,找到 **模型选择器** 下拉框
+2. 从可用模型中选择不同的模型
+3. 可选择启用 **强制覆盖** 以确保使用此模型,即使 Bot 已配置了模型
+
+**使用场景**:
+- 在不修改 Bot 配置的情况下测试不同模型
+- 对复杂任务使用更强大的模型
+- 对简单任务使用更快/更便宜的模型
+
+### 模型解析优先级
+
+当任务运行时,模型按以下顺序解析:
+
+1. **任务级覆盖** (如果 `force_override_bot_model` 为 true)
+2. **Bot 的 bind_model** (来自 agent_config)
+3. **Bot 的 modelRef** (旧版)
+4. **默认模型**
+
+---
+
 ## ✅ 配置验证

 配置 Model 后,**务必进行验证**以确保配置正确,避免后续使用时出错。
diff --git a/docs/zh/guides/user/creating-bots.md b/docs/zh/guides/user/creating-bots.md
index ccbe9192..f73a4d47 100644
--- a/docs/zh/guides/user/creating-bots.md
+++ b/docs/zh/guides/user/creating-bots.md
@@ -158,7 +158,32 @@ status:
 |------|------|------|------|
 | `ghostRef` | object | 是 | Ghost 资源引用 |
 | `shellRef` | object | 是 | Shell 资源引用 |
-| `modelRef` | object | 是 | Model 资源引用 |
+| `modelRef` | object | 否 | Model 资源引用 (可选,也可使用 `bind_model`) |
+
+#### 模型绑定方式
+
+有两种方式将模型绑定到 Bot:
+
+**方式 1: 使用 modelRef (旧版)**
+```yaml
+spec:
+  modelRef:
+    name: <model-name>
+    namespace: default
+```
+
+**方式 2: 在 agent_config 中使用 bind_model (推荐)**
+```yaml
+spec:
+  agent_config:
+    bind_model: "my-custom-model"
+    bind_model_type: "user"  # 可选: 'public' 或 'user'
+```
+
+`bind_model` 方式提供更多灵活性:
+- 通过名称引用模型,无需完整的 YAML 结构
+- 可选指定模型类型以避免命名冲突
+- 如果未指定,系统会自动检测模型类型 (优先用户模型,然后公共模型)
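+
+作为示意,一个使用 `bind_model` 的完整 Bot `spec` 大致如下 (Ghost、Shell 与模型名称仅为示例,完全省略 `modelRef`):
+
+```yaml
+spec:
+  ghostRef:
+    name: my-ghost
+    namespace: default
+  shellRef:
+    name: my-shell
+    namespace: default
+  agent_config:
+    bind_model: "my-custom-model"
+    # 显式指定为用户模型,避免与同名公共模型产生歧义
+    bind_model_type: "user"
+```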

 #### 引用对象格式
diff --git a/docs/zh/guides/user/managing-tasks.md b/docs/zh/guides/user/managing-tasks.md
index c72c5901..00f975fe 100644
--- a/docs/zh/guides/user/managing-tasks.md
+++ b/docs/zh/guides/user/managing-tasks.md
@@ -187,6 +187,20 @@ status:
 | `prompt` | string | 是 | 详细的任务描述和需求 |
 | `teamRef` | object | 是 | 执行任务的 Team 引用 |
 | `workspaceRef` | object | 是 | 工作空间引用 |
+| `model_id` | string | 否 | 覆盖 Bot 默认模型的模型名称 |
+| `force_override_bot_model` | boolean | 否 | 强制使用指定模型,即使 Bot 已配置模型 |
+
+### 单任务模型选择
+
+通过 Web 界面创建任务时,您可以选择不同的模型:
+
+1. **模型选择器**: 在聊天输入区域,使用模型下拉框从可用模型中选择
+2. **强制覆盖**: 启用此选项以确保无论 Bot 配置如何都使用您选择的模型
+
+**使用场景**:
+- 在不修改 Bot 配置的情况下测试不同模型
+- 对复杂的一次性任务使用更强大的模型
+- 对简单查询使用更便宜/更快的模型

 #### status 部分