diff --git a/PLAN-remove-ollama-cloud-provider.md b/PLAN-remove-ollama-cloud-provider.md new file mode 100644 index 0000000..ca163ce --- /dev/null +++ b/PLAN-remove-ollama-cloud-provider.md @@ -0,0 +1,159 @@ +# Plan: Remove Ollama Cloud Provider + +## Overview +Remove the `ollama-cloud` provider implementation and consolidate all Ollama functionality under the existing `ollama` provider, which already works correctly with Ollama Cloud via the OpenAI-compatible API. + +## Background +- The `ollama` provider (using OpenAI-compatible API) works correctly with Ollama Cloud +- The `ollama-cloud` provider has issues with API endpoints, authentication, and tool calling +- Maintaining two implementations for the same service creates unnecessary complexity + +## Analysis of Current State + +### Files Reference Ollama Cloud: + +1. **Provider Registration** (`src/providers/index.ts`) + - Line 38: `providerFactories.set('ollama-cloud', (options) => new OllamaCloudProvider(options));` + - Line 152: Provider detection logic for `process.env.OLLAMA_CLOUD === 'true'` + - Lines 154-156: Default to Ollama Cloud if environment variable set + +2. **Provider Implementation** (`src/providers/ollama-cloud.ts`) + - Full `OllamaCloudProvider` class (927 lines) + - Uses Ollama's native `/api` endpoint instead of OpenAI-compatible `/v1` + - Complex manual tool call parsing logic + +3. **Configuration Files** (`src/config.ts`) + - Line 26: Provider validation list includes `'ollama-cloud'` + - Line 403: Valid providers include `'ollama-cloud'` + +4. **CLI Options** (`src/index.ts`) + - Line 386: CLI help text mentions `'ollama-cloud'` + +5. **Static Models** (`src/models.ts`) + - Model listings may reference `ollama-cloud` provider + +## Removal Plan + +### Phase 1: Remove Provider Registration and Implementation +1. **Remove provider factory registration** in `src/providers/index.ts` + - Delete line 38: `providerFactories.set('ollama-cloud', ...)` + - Remove provider detection logic for `ollama-cloud` (lines 152-156) + - Update default provider detection to use `ollama` instead + +2. **Delete provider implementation file** + - Remove `src/providers/ollama-cloud.ts` + - Remove any exports/references to `OllamaCloudProvider` + +### Phase 2: Update Configuration and Validation +1. **Update provider validation** in `src/config.ts` + - Remove `'ollama-cloud'` from valid providers list + - Update config validation schema + +2. **Update CLI help and options** + - Remove `'ollama-cloud'` from help text in `src/index.ts` + +### Phase 3: Update Documentation +1. **Update README.md** + - Remove references to `ollama-cloud` provider + - Update provider usage examples + +2. **Update CODI.md** + - Remove Ollama Cloud provider documentation + - Consolidate Ollama provider documentation + +### Phase 4: Testing and Verification +1. **Verify Ollama provider works with Ollama Cloud** + ```bash + OLLAMA_HOST=https://ollama.com pnpm dev --provider ollama --model glm-4.7:cloud + ``` + +2. **Test migration path for existing users** + - Users currently using `ollama-cloud` should switch to `ollama` + - Update config examples and migration guide + +## Migration Guide for Users + +### Before (Old Way) +```bash +# Using ollama-cloud provider +codi --provider ollama-cloud --model glm-4.7:cloud +# Or in config +{ + "provider": "ollama-cloud", + "model": "glm-4.7:cloud" +} +``` + +### After (New Way) +```bash +# Using regular ollama provider with Ollama Cloud +OLLAMA_HOST=https://ollama.com codi --provider ollama --model glm-4.7:cloud +# Or in config +{ + "provider": "ollama", + "model": "glm-4.7:cloud", + "baseUrl": "https://ollama.com/v1" +} +``` + +## Benefits of This Change + +1. **Simplified Codebase**: One provider implementation instead of two +2. **Better Reliability**: Uses proven OpenAI-compatible API +3. **Native Tool Support**: Uses OpenAI's tool calling instead of manual parsing +4. **Reduced Maintenance**: Eliminates complex tool extraction logic +5. **Consistent UX**: Single provider type for all Ollama usage + +## Risks and Mitigations + +### Risk: Breaking Changes for Existing Users +- **Mitigation**: Provide clear migration guide +- **Mitigation**: Add deprecation warning in current version +- **Impact**: Low - users just need to change provider type + +### Risk: Loss of Provider-Specific Features +- **Assessment**: Ollama Cloud provider had no unique features vs Ollama provider +- The Ollama provider already handles all required functionality + +## Implementation Checklist + +- [ ] Remove provider factory registration +- [ ] Delete `src/providers/ollama-cloud.ts` +- [ ] Update provider validation lists +- [ ] Update CLI help text +- [ ] Update documentation +- [ ] Test migration paths +- [ ] Verify Ollama provider works correctly +- [ ] Remove any remaining references + +## Timeline + +**Phase 1** (Provider Removal): 1 hour +**Phase 2** (Configuration Updates): 30 minutes +**Phase 3** (Documentation): 30 minutes +**Phase 4** (Testing): 30 minutes + +**Total Estimated Time**: 2.5 hours + +## Post-Removal Verification + +After removal, verify: +- [ ] `codi --provider ollama` works with local Ollama +- [ ] `OLLAMA_HOST=https://ollama.com codi --provider ollama` works with Ollama Cloud +- [ ] Tool calling works correctly +- [ ] No broken imports or references remain + +## Notes + +The regular Ollama provider is superior because: +- Uses OpenAI-compatible API format (more reliable) +- Supports native tool calling with JSON schema +- Uses proven OpenAI SDK +- Already tested and working with Ollama Cloud + +The `ollama-cloud` provider attempted to use Ollama's native API but: +- Required complex manual tool parsing +- Had authentication issues +- Was less reliable overall + +This consolidation simplifies the codebase while maintaining full functionality. \ No newline at end of file diff --git a/docs/PLUGIN-INVESTIGATION.md b/docs/PLUGIN-INVESTIGATION.md new file mode 100644 index 0000000..11ab6b6 --- /dev/null +++ b/docs/PLUGIN-INVESTIGATION.md @@ -0,0 +1,242 @@ +# Plugin System Investigation + +**GitHub Issue**: #17 +**Status**: Temporarily Disabled +**Last Updated**: 2026-01-26 + +## Current State + +The plugin system is implemented in `src/plugins.ts` with commands in `src/commands/plugin-commands.ts`, but loading is disabled in `src/index.ts` (lines 2469-2474). + +### Implemented Features + +- Plugin loading from `~/.codi/plugins/` directory +- `CodiPlugin` interface supporting: + - Custom tools (via `BaseTool`) + - Custom commands (via `Command`) + - Custom providers (via factory pattern) + - Lifecycle hooks (`onLoad`, `onUnload`) +- Plugin validation and registration +- Commands: `/plugins`, `/plugins info `, `/plugins dir` + +### Why It Was Disabled + +The plugin system was disabled pending investigation of security and stability concerns. Loading arbitrary ESM modules from user directories introduces risks that need careful consideration. + +--- + +## Security Analysis + +### Risk Assessment + +| Risk | Severity | Description | +|------|----------|-------------| +| **Arbitrary Code Execution** | Critical | Plugins run with full Node.js privileges | +| **File System Access** | High | Plugins can read/write any files | +| **Network Access** | High | Plugins can make HTTP requests, open sockets | +| **Process Control** | High | Plugins can spawn child processes | +| **Credential Theft** | High | Plugins could access environment variables, keychains | +| **Supply Chain** | Medium | No verification of plugin source/integrity | + +### Current Mitigations + +1. **Manual Installation**: Users must manually place plugins in `~/.codi/plugins/` +2. **Warning Messages**: Errors during plugin loading are logged (but not blocking) +3. **Interface Validation**: Basic schema validation of the `CodiPlugin` interface + +### Missing Mitigations + +1. **No Sandboxing**: Plugins run in the same process with full access +2. **No Permission System**: No granular control over what plugins can do +3. **No Signature Verification**: No way to verify plugin authenticity +4. **No Version Compatibility**: No check if plugin is compatible with Codi version +5. **No Dependency Resolution**: Plugins with conflicting dependencies could cause issues + +--- + +## Security Recommendations + +### Tier 1: Documentation & Warnings (Minimal Effort) + +1. **Clear Documentation**: Document that plugins run with full privileges +2. **Startup Warning**: Show warning when plugins are loaded +3. **Trust Model**: Document that users should only install plugins from trusted sources + +### Tier 2: Basic Isolation (Medium Effort) + +1. **Separate Process**: Run plugins in child processes with IPC +2. **Permission Prompts**: Prompt user before allowing sensitive operations +3. **Allowlist/Blocklist**: Let users configure which plugins can load + +### Tier 3: Full Sandboxing (High Effort) + +1. **VM Isolation**: Use `vm` module or `isolated-vm` for sandboxing +2. **Capability-Based Security**: Grant plugins specific capabilities +3. **Resource Limits**: Limit CPU, memory, file handles per plugin + +### Recommended Approach + +Start with **Tier 1** (documentation) plus selected elements from **Tier 2**: + +```typescript +// Example: Permission-based loading +interface PluginPermissions { + fileSystem: 'none' | 'read' | 'read-write'; + network: boolean; + subprocess: boolean; + environment: boolean; +} + +// User approves permissions on first load +async function loadPluginWithConsent(pluginDir: string): Promise { + const permissions = readPluginManifest(pluginDir); + const approved = await promptUserPermissions(permissions); + if (!approved) throw new Error('Plugin permissions denied'); + return loadPlugin(pluginDir); +} +``` + +--- + +## API Stability Assessment + +### Stable (Safe for Plugins) + +| Interface | Location | Notes | +|-----------|----------|-------| +| `CodiPlugin` | `src/plugins.ts` | Core plugin interface | +| `BaseTool` | `src/tools/base.ts` | Well-established, used internally | +| `Command` | `src/commands/index.ts` | Stable command structure | +| `ProviderConfig` | `src/types.ts` | Standard provider options | + +### Unstable (May Change) + +| Interface | Location | Risk | +|-----------|----------|------| +| `Agent` | `src/agent.ts` | Internal implementation details | +| `Message`, `ContentBlock` | `src/types.ts` | May evolve with new model features | +| Tool schemas | Various | May add new required fields | + +### Recommendations + +1. **Version the Plugin API**: Plugins declare minimum Codi version +2. **Semantic Versioning**: Breaking changes bump major version +3. **Deprecation Warnings**: Warn before removing plugin APIs +4. **Plugin SDK**: Consider publishing a separate `@codi/plugin-sdk` package + +--- + +## Missing Features + +### Discovery & Installation + +- **npm Registry**: Allow `codi plugin install ` from npm +- **Plugin Marketplace**: Curated list of verified plugins +- **Auto-Update**: Check for plugin updates + +### Management + +- **Enable/Disable**: Toggle plugins without deleting +- **Dependency Resolution**: Handle plugin dependencies +- **Conflict Detection**: Warn if plugins conflict + +### Development + +- **Plugin Template**: `codi plugin create ` scaffolding +- **Testing Utilities**: Mock Codi environment for plugin tests +- **Hot Reload**: Reload plugins without restarting Codi + +--- + +## Roadmap to Re-enablement + +### Phase 1: Documentation (Ready Now) + +1. Create user documentation for plugin security model +2. Add startup warning when plugins are loaded +3. Re-enable plugin loading with warnings + +**Deliverables**: +- Update CLAUDE.md with plugin security notes +- Add `--plugins` / `--no-plugins` CLI flags +- Console warning: "Plugins loaded. Plugins have full system access." + +### Phase 2: Basic Safety (1-2 weeks) + +1. Add plugin manifest with declared permissions +2. Prompt user to approve permissions on first load +3. Store approved plugins list in `~/.codi/approved-plugins.json` + +**Deliverables**: +- `permissions` field in plugin `package.json` +- One-time approval prompt +- `--approve-plugins` flag for CI/automation + +### Phase 3: Process Isolation (2-4 weeks) + +1. Run plugins in child worker processes +2. Use IPC for tool/command registration +3. Timeout and resource limits + +**Deliverables**: +- Worker-based plugin host +- Plugin crash doesn't crash Codi +- `max-plugin-memory` config option + +### Phase 4: Distribution (Future) + +1. Plugin publishing to npm with `codi-plugin` keyword +2. `codi plugin install/uninstall/update` commands +3. Optional plugin signing/verification + +--- + +## Quick Re-enablement (If Accepting Risk) + +To re-enable the plugin system with current implementation: + +1. Uncomment lines 2471-2474 in `src/index.ts` +2. Add warning message to startup output +3. Document security implications in README + +```typescript +// src/index.ts - Line 2469 +const loadedPlugins = await loadPluginsFromDirectory(); +if (loadedPlugins.length > 0) { + console.log(chalk.yellow('Warning: Plugins loaded. Plugins have full system access.')); + console.log(chalk.dim(`Plugins: ${loadedPlugins.map(p => p.plugin.name).join(', ')}`)); +} +``` + +--- + +## Appendix: Example Plugin + +```javascript +// ~/.codi/plugins/hello-world/index.js +export default { + name: 'hello-world', + version: '1.0.0', + description: 'Example plugin', + + commands: [{ + name: 'hello', + description: 'Say hello', + execute: async (args) => `Hello, ${args || 'world'}!`, + }], + + onLoad: async () => { + console.log('Hello World plugin loaded!'); + }, +}; +``` + +```json +// ~/.codi/plugins/hello-world/package.json +{ + "name": "codi-plugin-hello-world", + "version": "1.0.0", + "main": "index.js", + "type": "module" +} +``` diff --git a/src/cli/history.ts b/src/cli/history.ts new file mode 100644 index 0000000..043b797 --- /dev/null +++ b/src/cli/history.ts @@ -0,0 +1,42 @@ +// Copyright 2026 Layne Penney +// SPDX-License-Identifier: AGPL-3.0-or-later + +/** + * Command history management for the CLI. + */ + +import { existsSync, readFileSync, appendFileSync } from 'node:fs'; +import { homedir } from 'node:os'; +import { join } from 'node:path'; + +export const HISTORY_FILE = process.env.CODI_HISTORY_FILE || join(homedir(), '.codi_history'); +export const MAX_HISTORY_SIZE = 1000; + +/** + * Load command history from file. + * Node.js readline shows index 0 first when pressing UP, so newest must be first. + */ +export function loadHistory(): string[] { + try { + if (existsSync(HISTORY_FILE)) { + const content = readFileSync(HISTORY_FILE, 'utf-8'); + const lines = content.split('\n').filter((line) => line.trim()); + // File has oldest first, newest last. Reverse so newest is at index 0. + return lines.slice(-MAX_HISTORY_SIZE).reverse(); + } + } catch { + // Ignore errors reading history + } + return []; +} + +/** + * Append a command to history file. + */ +export function saveToHistory(command: string): void { + try { + appendFileSync(HISTORY_FILE, command + '\n'); + } catch { + // Ignore errors writing history + } +} diff --git a/src/cli/index.ts b/src/cli/index.ts new file mode 100644 index 0000000..2f9eb1c --- /dev/null +++ b/src/cli/index.ts @@ -0,0 +1,27 @@ +// Copyright 2026 Layne Penney +// SPDX-License-Identifier: AGPL-3.0-or-later + +/** + * CLI utilities module. + */ + +export { + HISTORY_FILE, + MAX_HISTORY_SIZE, + loadHistory, + saveToHistory, +} from './history.js'; + +export { + type PipelineInputConfig, + DEFAULT_PIPELINE_INPUT_CONFIG, + isGlobOrFilePath, + resolvePipelineInput, + resolveFileList, +} from './pipeline-input.js'; + +export { + type NonInteractiveResult, + type NonInteractiveOptions, + runNonInteractive, +} from './non-interactive.js'; diff --git a/src/cli/non-interactive.ts b/src/cli/non-interactive.ts new file mode 100644 index 0000000..93e9b3f --- /dev/null +++ b/src/cli/non-interactive.ts @@ -0,0 +1,143 @@ +// Copyright 2026 Layne Penney +// SPDX-License-Identifier: AGPL-3.0-or-later + +/** + * Non-interactive mode execution for single-prompt CLI usage. + */ + +import chalk from 'chalk'; +import type { Agent } from '../agent.js'; +import type { AuditLogger } from '../audit.js'; +import type { BackgroundIndexer } from '../rag/index.js'; +import type { MCPClientManager } from '../mcp/index.js'; +import { spinner } from '../spinner.js'; + +/** + * Non-interactive mode result type for JSON output. + */ +export interface NonInteractiveResult { + success: boolean; + response: string; + toolCalls: Array<{ name: string; input: Record }>; + usage: { inputTokens: number; outputTokens: number } | null; + error?: string; +} + +/** + * Options for non-interactive mode execution. + */ +export interface NonInteractiveOptions { + outputFormat: 'text' | 'json'; + quiet: boolean; + auditLogger: AuditLogger; + ragIndexer: BackgroundIndexer | null; + mcpManager: MCPClientManager | null; + autoSave?: () => void; +} + +/** + * Run Codi in non-interactive mode with a single prompt. + * Outputs result to stdout and exits with appropriate code. + */ +export async function runNonInteractive( + agent: Agent, + prompt: string, + options: NonInteractiveOptions +): Promise { + const { outputFormat, quiet, auditLogger, ragIndexer, mcpManager, autoSave } = options; + + // Disable spinner in quiet mode + if (quiet) { + spinner.setEnabled(false); + } + + // Track tool calls for JSON output + const toolCalls: Array<{ name: string; input: Record }> = []; + let lastUsage: { inputTokens: number; outputTokens: number } | null = null; + + try { + // Suppress normal output in JSON mode, collect for later + let responseText = ''; + + if (outputFormat === 'json') { + // In JSON mode, suppress streaming output - we'll collect it + // Note: The agent's callbacks are already set up, but we need to + // track the response ourselves + } + + // Log user input + auditLogger.userInput(prompt); + + // Run the agent + if (!quiet) { + spinner.thinking(); + } + + const response = await agent.chat(prompt); + responseText = response; + + // Stop spinner + spinner.stop(); + + autoSave?.(); + + // Get usage info from agent's context + const contextInfo = agent.getContextInfo(); + + // Output based on format + if (outputFormat === 'json') { + const result: NonInteractiveResult = { + success: true, + response: responseText, + toolCalls, + usage: lastUsage, + }; + console.log(JSON.stringify(result, null, 2)); + } else { + // Text format - response was already streamed by agent callbacks + // Just add a newline for clean output + if (!responseText.endsWith('\n')) { + console.log(); + } + } + + // Cleanup + if (ragIndexer) { + ragIndexer.shutdown(); + } + if (mcpManager) { + await mcpManager.disconnectAll(); + } + auditLogger.sessionEnd(); + + process.exit(0); + } catch (error) { + spinner.stop(); + + const errorMessage = error instanceof Error ? error.message : String(error); + + if (outputFormat === 'json') { + const result: NonInteractiveResult = { + success: false, + response: '', + toolCalls, + usage: lastUsage, + error: errorMessage, + }; + console.log(JSON.stringify(result, null, 2)); + } else { + console.error(chalk.red('Error: ' + errorMessage)); + } + + // Cleanup + if (ragIndexer) { + ragIndexer.shutdown(); + } + if (mcpManager) { + await mcpManager.disconnectAll(); + } + auditLogger.sessionEnd(); + + process.exit(1); + } +} diff --git a/src/cli/pipeline-input.ts b/src/cli/pipeline-input.ts new file mode 100644 index 0000000..f7f6a09 --- /dev/null +++ b/src/cli/pipeline-input.ts @@ -0,0 +1,247 @@ +// Copyright 2026 Layne Penney +// SPDX-License-Identifier: AGPL-3.0-or-later + +/** + * Pipeline input resolution for file/glob patterns. + */ + +import { existsSync, readFileSync, statSync } from 'node:fs'; +import { glob } from 'node:fs/promises'; +import { join } from 'node:path'; +import { isPathWithinProject } from '../utils/path-validation.js'; + +/** + * Configuration for pipeline input resolution. + */ +export interface PipelineInputConfig { + maxFiles: number; + maxFileSize: number; + maxTotalSize: number; +} + +export const DEFAULT_PIPELINE_INPUT_CONFIG: PipelineInputConfig = { + maxFiles: 20, + maxFileSize: 50000, // 50KB per file + maxTotalSize: 200000, // 200KB total +}; + +/** + * Check if a string looks like a glob pattern or file path. + */ +export function isGlobOrFilePath(input: string): boolean { + // Check for glob patterns + if (input.includes('*') || input.includes('?')) { + return true; + } + // Check if it looks like a file path (starts with ./ or / or contains file extensions) + if (input.startsWith('./') || input.startsWith('/') || input.startsWith('src/')) { + return true; + } + // Check for common file extensions + if (/\.(ts|js|tsx|jsx|py|go|rs|java|md|json|yaml|yml)$/i.test(input)) { + return true; + } + return false; +} + +/** + * Resolve pipeline input to actual file contents. + * If input is a glob pattern or file path, reads the files and returns their contents. + * Otherwise, returns the input as-is. + */ +export async function resolvePipelineInput( + input: string, + config: PipelineInputConfig = DEFAULT_PIPELINE_INPUT_CONFIG +): Promise<{ resolvedInput: string; filesRead: number; truncated: boolean }> { + if (!isGlobOrFilePath(input)) { + return { resolvedInput: input, filesRead: 0, truncated: false }; + } + + const cwd = process.cwd(); + const files: string[] = []; + + // Check if it's a direct file path or a glob pattern + if (input.includes('*') || input.includes('?')) { + // It's a glob pattern + for await (const file of glob(input, { cwd })) { + // Validate each file is within project (handles symlinks) + const fullPath = join(cwd, file); + if (isPathWithinProject(fullPath, cwd)) { + files.push(file); + } + } + } else { + // It's a direct file path + const fullPath = input.startsWith('/') ? input : join(cwd, input); + + // Validate path is within project directory (prevent path traversal) + if (!isPathWithinProject(fullPath, cwd)) { + return { + resolvedInput: `Security error: Path "${input}" resolves outside the project directory.`, + filesRead: 0, + truncated: false + }; + } + + if (existsSync(fullPath)) { + try { + const stat = statSync(fullPath); + if (stat.isFile()) { + files.push(input); + } else if (stat.isDirectory()) { + // If it's a directory, glob for common code files + for await (const file of glob(`${input}/**/*.{ts,js,tsx,jsx,py,go,rs,java,md,json,yaml,yml}`, { cwd })) { + // Validate each file is within project (handles symlinks) + const filePath = join(cwd, file); + if (isPathWithinProject(filePath, cwd)) { + files.push(file); + } + } + } + } catch { + // Ignore stat errors + } + } + } + + if (files.length === 0) { + return { resolvedInput: `No files found matching: ${input}`, filesRead: 0, truncated: false }; + } + + // Sort files for consistent ordering + files.sort(); + + // Limit number of files + const filesToRead = files.slice(0, config.maxFiles); + const truncatedFiles = files.length > config.maxFiles; + + // Read file contents + const contents: string[] = []; + let totalSize = 0; + let truncatedSize = false; + + for (const file of filesToRead) { + const fullPath = file.startsWith('/') ? file : join(cwd, file); + + // Defense in depth: validate path again before reading + if (!isPathWithinProject(fullPath, cwd)) { + contents.push(`\n### File: ${file}\n\`\`\`\n[Skipped: path resolves outside project directory]\n\`\`\`\n`); + continue; + } + + try { + const stat = statSync(fullPath); + if (!stat.isFile()) continue; + + // Check file size + if (stat.size > config.maxFileSize) { + contents.push(`\n### File: ${file}\n\`\`\`\n[File too large: ${(stat.size / 1024).toFixed(1)}KB > ${(config.maxFileSize / 1024).toFixed(0)}KB limit]\n\`\`\`\n`); + continue; + } + + // Check total size limit + if (totalSize + stat.size > config.maxTotalSize) { + truncatedSize = true; + contents.push(`\n### File: ${file}\n\`\`\`\n[Skipped: total size limit reached]\n\`\`\`\n`); + continue; + } + + const content = readFileSync(fullPath, 'utf-8'); + const ext = file.split('.').pop() || ''; + contents.push(`\n### File: ${file}\n\`\`\`${ext}\n${content}\n\`\`\`\n`); + totalSize += stat.size; + } catch (error) { + contents.push(`\n### File: ${file}\n\`\`\`\n[Error reading file: ${error instanceof Error ? error.message : 'Unknown error'}]\n\`\`\`\n`); + } + } + + // Build the resolved input + let resolvedInput = `## Files matching: ${input}\n\nFound ${files.length} file(s)`; + if (truncatedFiles) { + resolvedInput += ` (showing first ${config.maxFiles})`; + } + resolvedInput += `:\n${contents.join('')}`; + + if (truncatedSize) { + resolvedInput += `\n\n[Note: Some files skipped due to total size limit of ${(config.maxTotalSize / 1024).toFixed(0)}KB]`; + } + + return { + resolvedInput, + filesRead: filesToRead.length, + truncated: truncatedFiles || truncatedSize, + }; +} + +/** + * Resolve a glob pattern or file path to a list of files (without reading contents). + * Used for iterative pipeline execution. + */ +export async function resolveFileList( + input: string, + maxFileSize: number = DEFAULT_PIPELINE_INPUT_CONFIG.maxFileSize +): Promise { + if (!isGlobOrFilePath(input)) { + return []; + } + + const cwd = process.cwd(); + const files: string[] = []; + + if (input.includes('*') || input.includes('?')) { + // Glob pattern + for await (const file of glob(input, { cwd })) { + const fullPath = join(cwd, file); + // Validate path is within project (handles symlinks) + if (!isPathWithinProject(fullPath, cwd)) { + continue; + } + try { + const stat = statSync(fullPath); + if (stat.isFile() && stat.size <= maxFileSize) { + files.push(file); + } + } catch { + // Skip files we can't stat + } + } + } else { + // Direct file path + const fullPath = input.startsWith('/') ? input : join(cwd, input); + + // Validate path is within project directory (prevent path traversal) + if (!isPathWithinProject(fullPath, cwd)) { + return []; // Return empty list for invalid paths + } + + if (existsSync(fullPath)) { + try { + const stat = statSync(fullPath); + if (stat.isFile() && stat.size <= maxFileSize) { + files.push(input); + } else if (stat.isDirectory()) { + // If directory, glob for code files + for await (const file of glob(`${input}/**/*.{ts,js,tsx,jsx,py,go,rs,java,md,json,yaml,yml}`, { cwd })) { + const filePath = join(cwd, file); + // Validate each file is within project (handles symlinks) + if (!isPathWithinProject(filePath, cwd)) { + continue; + } + try { + const fileStat = statSync(filePath); + if (fileStat.isFile() && fileStat.size <= maxFileSize) { + files.push(file); + } + } catch { + // Skip + } + } + } + } catch { + // Ignore + } + } + } + + return files.sort(); +} diff --git a/src/commands/model-commands.ts b/src/commands/model-commands.ts index e8cf202..911405d 100644 --- a/src/commands/model-commands.ts +++ b/src/commands/model-commands.ts @@ -158,7 +158,7 @@ export const switchCommand: Command = { if (providerType === 'anthropic') providerType = 'anthropic'; else if (providerType === 'openai') providerType = 'openai'; else if (providerType === 'ollama') providerType = 'ollama'; - else if (providerType === 'ollama cloud') providerType = 'ollama-cloud'; + else if (providerType === 'ollama cloud') providerType = 'ollama'; else if (providerType === 'runpod') providerType = 'runpod'; modelName = firstArg; } @@ -289,7 +289,7 @@ export const modelMapCommand: Command = { console.log(' --global Apply to global config (~/.codi/models.yaml)'); console.log('\nExamples:'); console.log(' /modelmap Show current configuration'); - console.log(' /modelmap add coder ollama-cloud qwen3-coder:480b-cloud'); + console.log(' /modelmap add coder ollama qwen3-coder:480b-cloud'); console.log(' /modelmap add --global fast anthropic claude-3-5-haiku-latest'); console.log(' /modelmap init Create project codi-models.yaml'); console.log(' /modelmap init --global Create global ~/.codi/models.yaml'); @@ -318,7 +318,7 @@ export const modelMapCommand: Command = { } // Validate provider - const validProviders = ['anthropic', 'openai', 'ollama', 'ollama-cloud', 'runpod']; + const validProviders = ['anthropic', 'openai', 'ollama', 'runpod']; if (!validProviders.includes(provider.toLowerCase())) { return `__MODELMAP_ERROR__|Invalid provider "${provider}". Valid: ${validProviders.join(', ')}`; } diff --git a/src/commands/orchestrate-commands.ts b/src/commands/orchestrate-commands.ts index 17c3d48..1ae068b 100644 --- a/src/commands/orchestrate-commands.ts +++ b/src/commands/orchestrate-commands.ts @@ -48,7 +48,6 @@ function generateWorkerId(): string { function normalizeProviderType(name: string): string { const lowered = name.trim().toLowerCase(); - if (lowered === 'ollama cloud') return 'ollama-cloud'; return lowered.replace(/\s+/g, '-'); } diff --git a/src/config.ts b/src/config.ts index b2ef850..a67f96d 100644 --- a/src/config.ts +++ b/src/config.ts @@ -1,5 +1,6 @@ // Copyright 2026 Layne Penney // SPDX-License-Identifier: AGPL-3.0-or-later +// SPDX-License-Identifier: AGPL-3.0-or-later import * as fs from 'fs'; import * as os from 'os'; @@ -23,7 +24,7 @@ const GLOBAL_CONFIG_FILE = path.join(GLOBAL_CONFIG_DIR, 'config.json'); * Can be defined in .codi.json or .codi/config.json in the project root. */ export interface WorkspaceConfig { - /** Provider to use (anthropic, openai, ollama, ollama-cloud, runpod) */ + /** Provider to use (anthropic, openai, ollama, runpod) */ provider?: string; /** Model name to use */ @@ -465,7 +466,7 @@ export function validateConfig(config: WorkspaceConfig): string[] { const warnings: string[] = []; // Validate provider - const validProviders = ['anthropic', 'openai', 'ollama', 'ollama-cloud', 'runpod', 'auto']; + const validProviders = ['anthropic', 'openai', 'ollama', 'runpod', 'auto']; if (config.provider && !validProviders.includes(config.provider)) { warnings.push(`Unknown provider "${config.provider}". Valid: ${validProviders.join(', ')}`); } @@ -739,7 +740,7 @@ export function mergeToolInput( */ export function getExampleConfig(): string { const example: WorkspaceConfig = { - provider: 'ollama-cloud', + provider: 'ollama', model: 'gpt-oss:120b-cloud', autoApprove: ['read_file', 'glob', 'grep', 'list_directory'], approvedPatterns: [], diff --git a/src/index.ts b/src/index.ts index b7f563f..ab10f78 100644 --- a/src/index.ts +++ b/src/index.ts @@ -11,284 +11,28 @@ import { } from './paste-debounce.js'; import { program } from 'commander'; import chalk from 'chalk'; -import { readFileSync, appendFileSync, existsSync, statSync } from 'fs'; +import { readFileSync, existsSync } from 'fs'; import { glob } from 'node:fs/promises'; import { homedir } from 'os'; import { spawn } from 'child_process'; import { join, resolve } from 'path'; import { format as formatUtil } from 'util'; import { getInterruptHandler, destroyInterruptHandler } from './interrupt.js'; -import { isPathWithinProject } from './utils/path-validation.js'; import { parseCommandChain, requestPermissionForChainedCommands } from './bash-utils.js'; - -// History configuration - allow override for testing -const HISTORY_FILE = process.env.CODI_HISTORY_FILE || join(homedir(), '.codi_history'); -const MAX_HISTORY_SIZE = 1000; - -/** - * Load command history from file. - * Node.js readline shows index 0 first when pressing UP, so newest must be first. - */ -function loadHistory(): string[] { - try { - if (existsSync(HISTORY_FILE)) { - const content = readFileSync(HISTORY_FILE, 'utf-8'); - const lines = content.split('\n').filter((line) => line.trim()); - // File has oldest first, newest last. Reverse so newest is at index 0. - return lines.slice(-MAX_HISTORY_SIZE).reverse(); - } - } catch { - // Ignore errors reading history - } - return []; -} - -/** - * Append a command to history file. - */ -function saveToHistory(command: string): void { - try { - appendFileSync(HISTORY_FILE, command + '\n'); - } catch { - // Ignore errors writing history - } -} - -/** - * Configuration for pipeline input resolution. - */ -interface PipelineInputConfig { - maxFiles: number; - maxFileSize: number; - maxTotalSize: number; -} - -const DEFAULT_PIPELINE_INPUT_CONFIG: PipelineInputConfig = { - maxFiles: 20, - maxFileSize: 50000, // 50KB per file - maxTotalSize: 200000, // 200KB total -}; - -/** - * Check if a string looks like a glob pattern or file path. - */ -function isGlobOrFilePath(input: string): boolean { - // Check for glob patterns - if (input.includes('*') || input.includes('?')) { - return true; - } - // Check if it looks like a file path (starts with ./ or / or contains file extensions) - if (input.startsWith('./') || input.startsWith('/') || input.startsWith('src/')) { - return true; - } - // Check for common file extensions - if (/\.(ts|js|tsx|jsx|py|go|rs|java|md|json|yaml|yml)$/i.test(input)) { - return true; - } - return false; -} - -/** - * Resolve pipeline input to actual file contents. - * If input is a glob pattern or file path, reads the files and returns their contents. - * Otherwise, returns the input as-is. - */ -async function resolvePipelineInput( - input: string, - config: PipelineInputConfig = DEFAULT_PIPELINE_INPUT_CONFIG -): Promise<{ resolvedInput: string; filesRead: number; truncated: boolean }> { - if (!isGlobOrFilePath(input)) { - return { resolvedInput: input, filesRead: 0, truncated: false }; - } - - const cwd = process.cwd(); - const files: string[] = []; - - // Check if it's a direct file path or a glob pattern - if (input.includes('*') || input.includes('?')) { - // It's a glob pattern - for await (const file of glob(input, { cwd })) { - // Validate each file is within project (handles symlinks) - const fullPath = join(cwd, file); - if (isPathWithinProject(fullPath, cwd)) { - files.push(file); - } - } - } else { - // It's a direct file path - const fullPath = input.startsWith('/') ? input : join(cwd, input); - - // Validate path is within project directory (prevent path traversal) - if (!isPathWithinProject(fullPath, cwd)) { - return { - resolvedInput: `Security error: Path "${input}" resolves outside the project directory.`, - filesRead: 0, - truncated: false - }; - } - - if (existsSync(fullPath)) { - try { - const stat = statSync(fullPath); - if (stat.isFile()) { - files.push(input); - } else if (stat.isDirectory()) { - // If it's a directory, glob for common code files - for await (const file of glob(`${input}/**/*.{ts,js,tsx,jsx,py,go,rs,java,md,json,yaml,yml}`, { cwd })) { - // Validate each file is within project (handles symlinks) - const filePath = join(cwd, file); - if (isPathWithinProject(filePath, cwd)) { - files.push(file); - } - } - } - } catch { - // Ignore stat errors - } - } - } - - if (files.length === 0) { - return { resolvedInput: `No files found matching: ${input}`, filesRead: 0, truncated: false }; - } - - // Sort files for consistent ordering - files.sort(); - - // Limit number of files - const filesToRead = files.slice(0, config.maxFiles); - const truncatedFiles = files.length > config.maxFiles; - - // Read file contents - const contents: string[] = []; - let totalSize = 0; - let truncatedSize = false; - - for (const file of filesToRead) { - const fullPath = file.startsWith('/') ? file : join(cwd, file); - - // Defense in depth: validate path again before reading - if (!isPathWithinProject(fullPath, cwd)) { - contents.push(`\n### File: ${file}\n\`\`\`\n[Skipped: path resolves outside project directory]\n\`\`\`\n`); - continue; - } - - try { - const stat = statSync(fullPath); - if (!stat.isFile()) continue; - - // Check file size - if (stat.size > config.maxFileSize) { - contents.push(`\n### File: ${file}\n\`\`\`\n[File too large: ${(stat.size / 1024).toFixed(1)}KB > ${(config.maxFileSize / 1024).toFixed(0)}KB limit]\n\`\`\`\n`); - continue; - } - - // Check total size limit - if (totalSize + stat.size > config.maxTotalSize) { - truncatedSize = true; - contents.push(`\n### File: ${file}\n\`\`\`\n[Skipped: total size limit reached]\n\`\`\`\n`); - continue; - } - - const content = readFileSync(fullPath, 'utf-8'); - const ext = file.split('.').pop() || ''; - contents.push(`\n### File: ${file}\n\`\`\`${ext}\n${content}\n\`\`\`\n`); - totalSize += stat.size; - } catch (error) { - contents.push(`\n### File: ${file}\n\`\`\`\n[Error reading file: ${error instanceof Error ? error.message : 'Unknown error'}]\n\`\`\`\n`); - } - } - - // Build the resolved input - let resolvedInput = `## Files matching: ${input}\n\nFound ${files.length} file(s)`; - if (truncatedFiles) { - resolvedInput += ` (showing first ${config.maxFiles})`; - } - resolvedInput += `:\n${contents.join('')}`; - - if (truncatedSize) { - resolvedInput += `\n\n[Note: Some files skipped due to total size limit of ${(config.maxTotalSize / 1024).toFixed(0)}KB]`; - } - - return { - resolvedInput, - filesRead: filesToRead.length, - truncated: truncatedFiles || truncatedSize, - }; -} - -/** - * Resolve a glob pattern or file path to a list of files (without reading contents). - * Used for iterative pipeline execution. - */ -async function resolveFileList( - input: string, - maxFileSize: number = DEFAULT_PIPELINE_INPUT_CONFIG.maxFileSize -): Promise { - if (!isGlobOrFilePath(input)) { - return []; - } - - const cwd = process.cwd(); - const files: string[] = []; - - if (input.includes('*') || input.includes('?')) { - // Glob pattern - for await (const file of glob(input, { cwd })) { - const fullPath = join(cwd, file); - // Validate path is within project (handles symlinks) - if (!isPathWithinProject(fullPath, cwd)) { - continue; - } - try { - const stat = statSync(fullPath); - if (stat.isFile() && stat.size <= maxFileSize) { - files.push(file); - } - } catch { - // Skip files we can't stat - } - } - } else { - // Direct file path - const fullPath = input.startsWith('/') ? input : join(cwd, input); - - // Validate path is within project directory (prevent path traversal) - if (!isPathWithinProject(fullPath, cwd)) { - return []; // Return empty list for invalid paths - } - - if (existsSync(fullPath)) { - try { - const stat = statSync(fullPath); - if (stat.isFile() && stat.size <= maxFileSize) { - files.push(input); - } else if (stat.isDirectory()) { - // If directory, glob for code files - for await (const file of glob(`${input}/**/*.{ts,js,tsx,jsx,py,go,rs,java,md,json,yaml,yml}`, { cwd })) { - const filePath = join(cwd, file); - // Validate each file is within project (handles symlinks) - if (!isPathWithinProject(filePath, cwd)) { - continue; - } - try { - const fileStat = statSync(filePath); - if (fileStat.isFile() && fileStat.size <= maxFileSize) { - files.push(file); - } - } catch { - // Skip - } - } - } - } catch { - // Ignore - } - } - } - - return files.sort(); -} +import { + HISTORY_FILE, + MAX_HISTORY_SIZE, + loadHistory, + saveToHistory, + type PipelineInputConfig, + DEFAULT_PIPELINE_INPUT_CONFIG, + isGlobOrFilePath, + resolvePipelineInput, + resolveFileList, + type NonInteractiveResult, + type NonInteractiveOptions, + runNonInteractive, +} from './cli/index.js'; import { Agent, type ToolConfirmation, type ConfirmationResult, type SecurityWarning } from './agent.js'; import { SecurityValidator, createSecurityValidator } from './security-validator.js'; @@ -383,7 +127,7 @@ program .name('codi') .description('Your AI coding wingman') .version(VERSION, '-v, --version', 'Output the current version') - .option('-p, --provider ', 'Provider to use (anthropic, openai, ollama, ollama-cloud, runpod)', 'auto') + .option('-p, --provider ', 'Provider to use (anthropic, openai, ollama, runpod)', 'auto') .option('-m, --model ', 'Model to use') .option('--base-url ', 'Base URL for API (for self-hosted models)') .option('--endpoint-id ', 'Endpoint ID (for RunPod serverless)') @@ -2603,136 +2347,6 @@ function handleSymbolsOutput(output: string): void { } } -/** - * Non-interactive mode result type for JSON output. - */ -interface NonInteractiveResult { - success: boolean; - response: string; - toolCalls: Array<{ name: string; input: Record }>; - usage: { inputTokens: number; outputTokens: number } | null; - error?: string; -} - -/** - * Options for non-interactive mode execution. - */ -interface NonInteractiveOptions { - outputFormat: 'text' | 'json'; - quiet: boolean; - auditLogger: AuditLogger; - ragIndexer: BackgroundIndexer | null; - mcpManager: MCPClientManager | null; - autoSave?: () => void; -} - -/** - * Run Codi in non-interactive mode with a single prompt. - * Outputs result to stdout and exits with appropriate code. - */ -async function runNonInteractive( - agent: Agent, - prompt: string, - options: NonInteractiveOptions -): Promise { - const { outputFormat, quiet, auditLogger, ragIndexer, mcpManager, autoSave } = options; - - // Disable spinner in quiet mode - if (quiet) { - spinner.setEnabled(false); - } - - // Track tool calls for JSON output - const toolCalls: Array<{ name: string; input: Record }> = []; - let lastUsage: { inputTokens: number; outputTokens: number } | null = null; - - try { - // Suppress normal output in JSON mode, collect for later - let responseText = ''; - - if (outputFormat === 'json') { - // In JSON mode, suppress streaming output - we'll collect it - // Note: The agent's callbacks are already set up, but we need to - // track the response ourselves - } - - // Log user input - auditLogger.userInput(prompt); - - // Run the agent - if (!quiet) { - spinner.thinking(); - } - - const response = await agent.chat(prompt); - responseText = response; - - // Stop spinner - spinner.stop(); - - autoSave?.(); - - // Get usage info from agent's context - const contextInfo = agent.getContextInfo(); - - // Output based on format - if (outputFormat === 'json') { - const result: NonInteractiveResult = { - success: true, - response: responseText, - toolCalls, - usage: lastUsage, - }; - console.log(JSON.stringify(result, null, 2)); - } else { - // Text format - response was already streamed by agent callbacks - // Just add a newline for clean output - if (!responseText.endsWith('\n')) { - console.log(); - } - } - - // Cleanup - if (ragIndexer) { - ragIndexer.shutdown(); - } - if (mcpManager) { - await mcpManager.disconnectAll(); - } - auditLogger.sessionEnd(); - - process.exit(0); - } catch (error) { - spinner.stop(); - - const errorMessage = error instanceof Error ? error.message : String(error); - - if (outputFormat === 'json') { - const result: NonInteractiveResult = { - success: false, - response: '', - toolCalls, - usage: lastUsage, - error: errorMessage, - }; - console.log(JSON.stringify(result, null, 2)); - } else { - console.error(chalk.red('Error: ' + errorMessage)); - } - - // Cleanup - if (ragIndexer) { - ragIndexer.shutdown(); - } - if (mcpManager) { - await mcpManager.disconnectAll(); - } - auditLogger.sessionEnd(); - - process.exit(1); - } -} - /** * CLI entrypoint. * diff --git a/src/model-map/fast-scan.ts b/src/model-map/fast-scan.ts index a7dd709..0a543e5 100644 --- a/src/model-map/fast-scan.ts +++ b/src/model-map/fast-scan.ts @@ -158,7 +158,7 @@ export async function fastScanFiles( ): Promise { // Get fast model provider const roleName = options.fastRole || 'fast'; - const resolved = router.resolveRole(roleName, options.providerContext || 'ollama-cloud'); + const resolved = router.resolveRole(roleName, options.providerContext || 'ollama'); if (!resolved) { throw new Error(`No model found for role '${roleName}'`); diff --git a/src/model-map/loader.ts b/src/model-map/loader.ts index f190036..5dd62bd 100644 --- a/src/model-map/loader.ts +++ b/src/model-map/loader.ts @@ -547,7 +547,7 @@ export function getExampleModelMap(): string { description: 'Free local model', }, 'cloud-coder': { - provider: 'ollama-cloud', + provider: 'ollama', model: 'qwen3-coder:480b-cloud', description: 'Cloud coder', }, @@ -557,12 +557,12 @@ export function getExampleModelMap(): string { description: 'qwen lite', }, 'cloud-fast': { - provider: 'ollama-cloud', + provider: 'ollama', model: 'gemini-3-flash-preview:cloud', description: 'gemini cloud fast', }, 'cloud-reasoning': { - provider: 'ollama-cloud', + provider: 'ollama', model: 'gpt-oss:120b-cloud', description: 'cloud gpt oss 120b-cloud', }, @@ -572,32 +572,29 @@ export function getExampleModelMap(): string { anthropic: 'haiku', openai: 'gpt-5-nano', ollama: 'local', - 'ollama-cloud': 'cloud-fast', }, capable: { anthropic: 'sonnet', openai: 'gpt-5', ollama: 'local', - 'ollama-cloud': 'cloud-coder', }, reasoning: { anthropic: 'opus', openai: 'gpt-5', ollama: 'local', - 'ollama-cloud': 'cloud-reasoning', }, }, tasks: { fast: { - model: 'cloud-fast', + model: 'local', description: 'Quick tasks (commits, summaries)', }, code: { - model: 'cloud-coder', + model: 'local', description: 'Standard coding tasks', }, complex: { - model: 'cloud-reasoning', + model: 'local', description: 'Architecture, debugging', }, summarize: { diff --git a/src/model-map/types.ts b/src/model-map/types.ts index 956f7a8..ccaa5af 100644 --- a/src/model-map/types.ts +++ b/src/model-map/types.ts @@ -11,7 +11,7 @@ * Named model definition with provider and settings. */ export interface ModelDefinition { - /** Provider type (anthropic, openai, ollama, ollama-cloud, runpod) */ + /** Provider type (anthropic, openai, ollama, runpod) */ provider: string; /** Model name/ID */ model: string; @@ -55,7 +55,6 @@ export type ProviderContext = | 'anthropic' | 'openai' | 'ollama' - | 'ollama-cloud' | string; /** diff --git a/src/providers/index.ts b/src/providers/index.ts index ef1f682..dbd6ff5 100644 --- a/src/providers/index.ts +++ b/src/providers/index.ts @@ -4,14 +4,13 @@ import { BaseProvider } from './base.js'; import { AnthropicProvider } from './anthropic.js'; import { OpenAICompatibleProvider, createOllamaProvider, createRunPodProvider } from './openai-compatible.js'; -import { OllamaCloudProvider } from './ollama-cloud.js'; + import { MockProvider } from './mock.js'; import type { ProviderConfig } from '../types.js'; export { BaseProvider } from './base.js'; export { AnthropicProvider } from './anthropic.js'; export { OpenAICompatibleProvider, createOllamaProvider, createRunPodProvider } from './openai-compatible.js'; -export { OllamaCloudProvider } from './ollama-cloud.js'; export { MockProvider } from './mock.js'; export type { MockProviderConfig, MockResponse, MockCall, MockResponsesFile } from './mock.js'; @@ -35,7 +34,6 @@ providerFactories.set('runpod', (options) => createRunPodProvider( options.model || 'default', options.apiKey )); -providerFactories.set('ollama-cloud', (options) => new OllamaCloudProvider(options)); providerFactories.set('mock', () => { // Support file-based configuration for E2E tests const responsesFile = process.env.CODI_MOCK_FILE; @@ -148,14 +146,6 @@ export function detectProvider(): BaseProvider { ); } - // Check if user wants to use Ollama Cloud - const useOllamaCloud = process.env.OLLAMA_CLOUD === 'true' || process.env.CODI_PROVIDER === 'ollama-cloud'; - - if (useOllamaCloud) { - console.log('Using Ollama Cloud provider'); - return new OllamaCloudProvider(); - } - // Default to Ollama for local usage console.log('Using Ollama provider (no API keys found, assuming local)'); return createOllamaProvider(); diff --git a/src/providers/ollama-cloud.ts b/src/providers/ollama-cloud.ts deleted file mode 100644 index c091dc2..0000000 --- a/src/providers/ollama-cloud.ts +++ /dev/null @@ -1,926 +0,0 @@ -// Copyright 2026 Layne Penney -// SPDX-License-Identifier: AGPL-3.0-or-later - -/** - * Ollama Cloud provider implementation using the Ollama API directly. - * Optimized for hosted Ollama services with rate limiting and retry logic. - * Use 'ollama' provider for local usage, 'ollama-cloud' for hosted services. - */ - -import { BaseProvider } from './base.js'; -import { createProviderResponse } from './response-parser.js'; -import { withRetry, type RetryOptions } from './retry.js'; -import { getProviderRateLimiter, type RateLimiter } from './rate-limiter.js'; -import { messageToText } from './message-converter.js'; -import type { Message, ToolDefinition, ProviderResponse, ProviderConfig, ToolCall } from '../types.js'; -import { DEFAULT_FALLBACK_CONFIG, findBestToolMatch } from '../tools/tool-fallback.js'; -import { logger, LogLevel } from '../logger.js'; -import { getOllamaModelInfo } from './ollama-model-info.js'; -import { MODEL_CONTEXT_OVERRIDES } from '../constants.js'; - -/** Ollama message format */ -interface OllamaMessage { - role: 'system' | 'user' | 'assistant'; - content: string; - images?: string[]; -} - -interface OllamaChatRequest { - model: string; - messages: OllamaMessage[]; - stream?: boolean; - format?: string; - options?: { - num_predict?: number; - temperature?: number; - top_k?: number; - top_p?: number; - repeat_penalty?: number; - presence_penalty?: number; - frequency_penalty?: number; - mirostat?: number; - mirostat_tau?: number; - mirostat_eta?: number; - penalize_newline?: boolean; - stop?: string[]; - }; - keep_alive?: string | number; -} - -interface OllamaToolCall { - function: { - name: string; - arguments: Record; - }; -} - -interface OllamaChatResponse { - model: string; - created_at: string; - message: { - role: string; - content: string; - thinking?: string; - tool_calls?: OllamaToolCall[]; - }; - done: boolean; - done_reason?: string; - total_duration?: number; - load_duration?: number; - prompt_eval_count?: number; - prompt_eval_duration?: number; - eval_count?: number; - eval_duration?: number; -} - -interface OllamaModelInfo { - id: string; - name: string; - provider: string; - capabilities: { - vision: boolean; - toolUse: boolean; - }; - pricing: { - input: number; - output: number; - }; -} - -export class OllamaCloudProvider extends BaseProvider { - private readonly baseUrl: string; - private readonly apiKey: string | undefined; - private readonly model: string; - private readonly temperature: number; - private readonly maxTokens: number | undefined; - private readonly retryOptions: RetryOptions; - private readonly rateLimiter: RateLimiter; - private retryCallback?: (attempt: number, error: Error, delayMs: number) => void; - private cachedContextWindow: number | null = null; - private contextWindowPromise: Promise | null = null; - - constructor(config: ProviderConfig & { retry?: RetryOptions } = {}) { - super(config); - - // Default to ollama.com for cloud usage - this.baseUrl = config.baseUrl || process.env.OLLAMA_HOST || 'https://ollama.com'; - // API key for authentication (required for ollama.com) - this.apiKey = process.env.OLLAMA_API_KEY; - this.model = config.model || 'glm-4.7:cloud'; - this.temperature = config.temperature ?? 0.7; - this.maxTokens = config.maxTokens; - // Default retry options: 5 retries with exponential backoff starting at 5s - // Tuned for Ollama cloud rate limits (~1 req/sec) - this.retryOptions = { - maxRetries: 5, - initialDelayMs: 5000, - maxDelayMs: 60000, - backoffMultiplier: 2, - jitter: true, - ...config.retry, - }; - // Get shared rate limiter for Ollama Cloud provider - this.rateLimiter = getProviderRateLimiter('ollama-cloud'); - } - - /** - * Set a callback to be notified when retries occur. - */ - setRetryCallback(callback: (attempt: number, error: Error, delayMs: number) => void): void { - this.retryCallback = callback; - } - - getName(): string { - return 'Ollama Cloud'; - } - - getModel(): string { - return this.model; - } - - supportsToolUse(): boolean { - // Ollama doesn't natively support tool calling, but we can simulate it through structured outputs or parsing - return true; - } - - supportsVision(): boolean { - // Some Ollama models support vision (like LLaVA-based ones) - const modelLower = this.model.toLowerCase(); - return modelLower.includes('llava') || - modelLower.includes('vision') || - modelLower.includes('bakllava'); - } - - /** - * Convert our message format to Ollama's format. - */ - private convertMessages(messages: Message[], systemPrompt?: string): OllamaMessage[] { - const ollamaMessages: OllamaMessage[] = []; - - // Add system prompt if provided - if (systemPrompt) { - ollamaMessages.push({ - role: 'system', - content: systemPrompt, - }); - } - - // Convert messages using shared utility - // messageToText handles all content block types (text, tool_result, tool_use, image) - for (const msg of messages) { - const content = messageToText(msg); - - // Map role to Ollama's expected values - const role: 'system' | 'user' | 'assistant' = - msg.role === 'system' ? 'system' : - msg.role === 'assistant' ? 'assistant' : 'user'; - - ollamaMessages.push({ role, content }); - } - - return ollamaMessages; - } - - async chat( - messages: Message[], - tools?: ToolDefinition[], - systemPrompt?: string - ): Promise { - // Fetch context window on first API call (cached after) - await this.ensureContextWindow(); - - const ollamaMessages = this.convertMessages(messages, systemPrompt); - - const requestBody: OllamaChatRequest = { - model: this.model, - messages: ollamaMessages, - stream: false, - options: { - temperature: this.temperature, - ...(this.maxTokens && { num_predict: this.maxTokens }), - }, - }; - - // Use rate limiter to prevent 429 errors - return this.rateLimiter.schedule(() => - withRetry( - async () => { - const response = await fetch(`${this.baseUrl}/api/chat`, { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - ...(this.apiKey && { 'Authorization': `Bearer ${this.apiKey}` }), - }, - body: JSON.stringify(requestBody), - }); - - if (!response.ok) { - throw new Error(`Ollama API request failed: ${response.status} ${response.statusText}`); - } - - const responseData: OllamaChatResponse = await response.json(); - - // Check for native tool calls first - let toolCalls: ToolCall[] = []; - if (responseData.message?.tool_calls && responseData.message.tool_calls.length > 0) { - toolCalls = responseData.message.tool_calls.map((tc, i) => ({ - id: `ollama_${Date.now()}_${i}`, - name: this.normalizeToolName(tc.function.name), - input: tc.function.arguments, - })); - } - - const rawContent = responseData.message.content || ''; - const thinkingField = responseData.message.thinking || ''; - - // Extract thinking content from tags - const { content: thinkingCleanedContent, thinking: tagThinking } = this.extractThinkingContent( - rawContent - ); - const combinedThinking = [thinkingField, tagThinking].filter(Boolean).join('\n'); - const hasContent = thinkingCleanedContent.trim().length > 0; - const useFallbackContent = !hasContent && combinedThinking.length > 0; - const finalContent = useFallbackContent ? combinedThinking : thinkingCleanedContent; - const reasoningContent = combinedThinking || undefined; - // If no non-thinking content, also look for tools in thinking content - const toolExtractionText = thinkingCleanedContent || combinedThinking; - - // Fall back to extracting tool calls from text if no native calls - if (toolCalls.length === 0 && tools && tools.length > 0) { - toolCalls = this.extractToolCalls(toolExtractionText, tools); - } - - const cleanedContent = this.maybeCleanHallucinatedTraces(finalContent, toolCalls); - - return createProviderResponse({ - content: cleanedContent, - toolCalls, - stopReason: responseData.done_reason, - reasoningContent, - inputTokens: responseData.prompt_eval_count, - outputTokens: responseData.eval_count, - rawResponse: responseData, - }); - }, - { - ...this.retryOptions, - onRetry: this.retryCallback, - } - ) - ); - } - - async streamChat( - messages: Message[], - tools?: ToolDefinition[], - onChunk?: (chunk: string) => void, - systemPrompt?: string, - onReasoningChunk?: (chunk: string) => void - ): Promise { - // Fetch context window on first API call (cached after) - await this.ensureContextWindow(); - - const ollamaMessages = this.convertMessages(messages, systemPrompt); - - const requestBody: OllamaChatRequest = { - model: this.model, - messages: ollamaMessages, - stream: true, - options: { - temperature: this.temperature, - ...(this.maxTokens && { num_predict: this.maxTokens }), - }, - }; - - // Use rate limiter to prevent 429 errors - return this.rateLimiter.schedule(() => - withRetry( - async () => { - const response = await fetch(`${this.baseUrl}/api/chat`, { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - ...(this.apiKey && { 'Authorization': `Bearer ${this.apiKey}` }), - }, - body: JSON.stringify(requestBody), - }); - - if (!response.ok) { - throw new Error(`Ollama API request failed: ${response.status} ${response.statusText}`); - } - - if (!response.body) { - throw new Error('Response body is undefined'); - } - - const reader = response.body.getReader(); - const decoder = new TextDecoder(); - let fullText = ''; - let thinkingText = ''; - let streamedContentChars = 0; - let streamedThinkingChars = 0; - let inputTokens: number | undefined; - let outputTokens: number | undefined; - let stopReason: string | undefined; - const nativeToolCalls: ToolCall[] = []; - const rawChunks: OllamaChatResponse[] = []; - - // Process streamed chunks - while (true) { - const { done, value } = await reader.read(); - if (done) break; - - const chunk = decoder.decode(value); - const lines = chunk.split('\n').filter(line => line.trim()); - - for (const line of lines) { - try { - const data: OllamaChatResponse = JSON.parse(line); - rawChunks.push(data); - - if (data.message?.content) { - const content = data.message.content; - fullText += content; - if (content) { - streamedContentChars += content.length; - if (onChunk) onChunk(content); - } - } - - if (data.message?.thinking) { - thinkingText += data.message.thinking; - if (onReasoningChunk) { - streamedThinkingChars += data.message.thinking.length; - onReasoningChunk(data.message.thinking); - } - } - - // Capture native tool calls from Ollama API - if (data.message?.tool_calls && data.message.tool_calls.length > 0) { - for (const tc of data.message.tool_calls) { - nativeToolCalls.push({ - id: `ollama_${Date.now()}_${nativeToolCalls.length}`, - name: this.normalizeToolName(tc.function.name), - input: tc.function.arguments, - }); - } - } - - // Capture token counts and stop reason from final chunk - if (data.done) { - inputTokens = data.prompt_eval_count; - outputTokens = data.eval_count; - stopReason = data.done_reason; - } - } catch { - // Not valid JSON, skip - continue; - } - } - } - - // Extract thinking content from tags (used by qwen3:thinking and similar models) - const { content: thinkingCleanedContent, thinking: tagThinking } = this.extractThinkingContent(fullText); - const combinedThinking = [thinkingText, tagThinking].filter(Boolean).join('\n'); - const hasContent = thinkingCleanedContent.trim().length > 0; - const useFallbackContent = !hasContent && combinedThinking.length > 0; - const finalContent = useFallbackContent ? combinedThinking : thinkingCleanedContent; - const reasoningContent = combinedThinking || undefined; - // If no non-thinking content, also look for tools in thinking content - const toolExtractionText = thinkingCleanedContent || combinedThinking; - - if (streamedContentChars === 0 && finalContent && onChunk && streamedThinkingChars === 0) { - onChunk(finalContent); - } - - // Use native tool calls if available, otherwise extract from text - let toolCalls: ToolCall[] = nativeToolCalls; - if (toolCalls.length === 0 && tools && tools.length > 0) { - toolCalls = this.extractToolCalls(toolExtractionText, tools); - } - - const cleanedContent = this.maybeCleanHallucinatedTraces(finalContent, toolCalls); - - return createProviderResponse({ - content: cleanedContent, - toolCalls, - stopReason: stopReason || 'stop', - reasoningContent, - inputTokens, - outputTokens, - rawResponse: { stream: true, chunks: rawChunks }, - }); - }, - { - ...this.retryOptions, - onRetry: this.retryCallback, - } - ) - ); - } - - async listModels(): Promise { - try { - const response = await fetch(`${this.baseUrl}/api/tags`, { - headers: this.apiKey ? { 'Authorization': `Bearer ${this.apiKey}` } : undefined, - }); - if (!response.ok) { - return []; - } - - const data = await response.json(); - return (data.models || []).map((m: { name: string }) => { - const nameLower = m.name.toLowerCase(); - const isVisionModel = nameLower.includes('llava') || - nameLower.includes('vision') || - nameLower.includes('bakllava'); - - return { - id: m.name, - name: m.name, - provider: 'Ollama', - capabilities: { - vision: isVisionModel, - toolUse: true, // Assume true for local models - }, - pricing: { - input: 0, - output: 0, // Local inference is free - }, - }; - }); - } catch { - // Ollama not running or not accessible - return []; - } - } - - /** - * Normalize tool name by stripping common prefixes and mapping aliases. - * Models trained on MCP or other tool frameworks may prefix tool names - * with things like "repo.", "repo_browser.", "mcp.", etc. - * Some models also use alternative tool names like "run_git" for "bash". - */ - private normalizeToolName(name: string): string { - // Common prefixes from MCP servers and other tool frameworks - const prefixes = [ - 'repo_browser.', - 'repo.', - 'mcp.', - 'tools.', - 'codi.', - ]; - - let normalized = name; - for (const prefix of prefixes) { - if (normalized.toLowerCase().startsWith(prefix)) { - normalized = normalized.slice(prefix.length); - break; // Only strip one prefix - } - } - - // Tool aliases - map alternative names to actual tool names - const aliases: Record = { - 'run_git': 'bash', - 'run_command': 'bash', - 'execute': 'bash', - 'shell': 'bash', - 'run_shell': 'bash', - 'exec': 'bash', - 'terminal': 'bash', - 'read': 'read_file', - 'write': 'write_file', - 'edit': 'edit_file', - 'search': 'grep', - 'find': 'glob', - 'ls': 'list_directory', - 'dir': 'list_directory', - }; - - const lowerNormalized = normalized.toLowerCase(); - if (aliases[lowerNormalized]) { - return aliases[lowerNormalized]; - } - - return normalized; - } - - /** - * Extract tool calls from response content. - * Looks for various formats that models use for tool calls. - */ - private extractToolCalls(content: string, tools: ToolDefinition[]): ToolCall[] { - const toolCalls: ToolCall[] = []; - const resolveToolName = (requestedName: string): string | null => { - const match = findBestToolMatch(requestedName, tools, DEFAULT_FALLBACK_CONFIG); - if (match.exactMatch) return requestedName; - if (match.shouldAutoCorrect && match.matchedName) return match.matchedName; - return null; - }; - - // Pattern 1: JSON in code blocks - most reliable - const codeBlockPattern = /```(?:json)?\s*([\s\S]*?)```/g; - let match; - - while ((match = codeBlockPattern.exec(content)) !== null) { - const jsonContent = match[1].trim(); - const extracted = this.tryParseToolCall(jsonContent, resolveToolName); - if (extracted) { - toolCalls.push(extracted); - } - } - - // If we found tool calls in code blocks, return them - if (toolCalls.length > 0) { - return toolCalls; - } - - // Pattern 2: Function-call style in brackets [tool_name(param="value", param2=value)] - // Used by models like qwen3-coder. Also handles prefixed names like [repo.bash(...)] - // We use a simpler regex to find potential starts, then parse properly to handle nested parens - const funcCallStartPattern = /\[([a-z_][a-z0-9_.]*)\(/gi; - - while ((match = funcCallStartPattern.exec(content)) !== null) { - const rawToolName = match[1]; - const normalizedName = this.normalizeToolName(rawToolName); - const startIndex = match.index + match[0].length; - - // Extract the arguments by finding the matching closing bracket - const argsString = this.extractBalancedParenContent(content, startIndex); - if (argsString === null) continue; - - // Verify it ends with )] - const endIndex = startIndex + argsString.length; - if (content[endIndex] !== ')' || content[endIndex + 1] !== ']') continue; - - const resolvedName = resolveToolName(normalizedName); - if (resolvedName) { - const args = this.parseFunctionCallArgs(argsString); - toolCalls.push({ - id: `extracted_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`, - name: resolvedName, - input: args, - }); - } - - // Move past this match to avoid re-matching - funcCallStartPattern.lastIndex = endIndex + 2; - } - - if (toolCalls.length > 0) { - return toolCalls; - } - - // Pattern 3: [Calling tool_name]: {json} format - // Used by some models that simulate agent traces. We extract the call but ignore - // any "[Result from ...]" which are hallucinated results. - const callingPattern = /\[Calling\s+([a-z_][a-z0-9_]*)\]\s*:\s*(\{[^}]*\})/gi; - - while ((match = callingPattern.exec(content)) !== null) { - const rawToolName = match[1]; - const normalizedName = this.normalizeToolName(rawToolName); - const jsonArgs = match[2]; - - const resolvedName = resolveToolName(normalizedName); - if (resolvedName) { - try { - const args = JSON.parse(jsonArgs); - toolCalls.push({ - id: `extracted_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`, - name: resolvedName, - input: args, - }); - } catch { - // Invalid JSON, skip - } - } - } - - if (toolCalls.length > 0) { - return toolCalls; - } - - // Pattern 3: Look for JSON objects with "name" field - // This pattern handles nested braces properly - const jsonPattern = /\{(?:[^{}]|\{(?:[^{}]|\{[^{}]*\})*\})*\}/g; - - while ((match = jsonPattern.exec(content)) !== null) { - const extracted = this.tryParseToolCall(match[0], resolveToolName); - if (extracted) { - toolCalls.push(extracted); - } - } - - return toolCalls; - } - - /** - * Extract content between parentheses, properly handling nested parens and quoted strings. - * Returns the content (not including the outer parens) or null if unbalanced. - */ - private extractBalancedParenContent(content: string, startIndex: number): string | null { - let depth = 1; // We start after the opening paren - let inString: string | null = null; // Track if we're inside a string ('"' or "'") - let escaped = false; - - for (let i = startIndex; i < content.length; i++) { - const char = content[i]; - - if (escaped) { - escaped = false; - continue; - } - - if (char === '\\') { - escaped = true; - continue; - } - - // Handle string delimiters - if ((char === '"' || char === "'") && !inString) { - inString = char; - continue; - } - - if (char === inString) { - inString = null; - continue; - } - - // Only count parens if not in a string - if (!inString) { - if (char === '(') { - depth++; - } else if (char === ')') { - depth--; - if (depth === 0) { - return content.slice(startIndex, i); - } - } - } - } - - return null; // Unbalanced - } - - /** - * Parse function-call style arguments like: path=".", show_hidden=true - * Handles complex quoted strings with escaped characters. - */ - private parseFunctionCallArgs(argsString: string): Record { - const args: Record = {}; - if (!argsString.trim()) return args; - - let i = 0; - while (i < argsString.length) { - // Skip whitespace - while (i < argsString.length && /\s/.test(argsString[i])) i++; - if (i >= argsString.length) break; - - // Find key (alphanumeric + underscore) - const keyStart = i; - while (i < argsString.length && /[a-z_]/i.test(argsString[i])) i++; - const key = argsString.slice(keyStart, i); - if (!key) break; - - // Skip whitespace and = - while (i < argsString.length && /\s/.test(argsString[i])) i++; - if (argsString[i] !== '=') break; - i++; // Skip = - while (i < argsString.length && /\s/.test(argsString[i])) i++; - - // Parse value - let value: string; - const quote = argsString[i]; - - if (quote === '"' || quote === "'") { - // Quoted string - find matching end quote, handling escapes - i++; // Skip opening quote - const valueStart = i; - let escaped = false; - - while (i < argsString.length) { - if (escaped) { - escaped = false; - i++; - continue; - } - if (argsString[i] === '\\') { - escaped = true; - i++; - continue; - } - if (argsString[i] === quote) { - break; - } - i++; - } - - value = argsString.slice(valueStart, i); - // Unescape basic escape sequences - value = value.replace(/\\n/g, '\n').replace(/\\t/g, '\t').replace(/\\"/g, '"').replace(/\\'/g, "'").replace(/\\\\/g, '\\'); - i++; // Skip closing quote - } else { - // Unquoted value - read until comma or end - const valueStart = i; - while (i < argsString.length && argsString[i] !== ',' && !/\s/.test(argsString[i])) i++; - value = argsString.slice(valueStart, i); - } - - // Convert value to appropriate type - if (value === 'true') { - args[key] = true; - } else if (value === 'false') { - args[key] = false; - } else if (value === 'null') { - args[key] = null; - } else if (!isNaN(Number(value)) && value !== '' && !/^0[0-9]/.test(value)) { - args[key] = Number(value); - } else { - args[key] = value; - } - - // Skip comma - while (i < argsString.length && /[\s,]/.test(argsString[i])) i++; - } - - return args; - } - - /** - * Try to parse a JSON string as a tool call. - */ - private tryParseToolCall( - jsonString: string, - resolveToolName: (requestedName: string) => string | null - ): ToolCall | null { - try { - const parsed = JSON.parse(jsonString); - - // Check if it has a valid tool name (normalize to strip prefixes) - if (parsed.name) { - const normalizedName = this.normalizeToolName(parsed.name); - const resolvedName = resolveToolName(normalizedName); - if (resolvedName) { - return { - id: `extracted_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`, - name: resolvedName, - input: parsed.arguments || parsed.input || parsed.parameters || {}, - }; - } - } - } catch { - // Not valid JSON - } - return null; - } - - /** - * Extract thinking/reasoning content from tags. - * Used by models like qwen3:thinking that wrap reasoning in XML-style tags. - */ - private extractThinkingContent(content: string): { content: string; thinking: string } { - // Match ... or ... tags - const thinkPattern = /([\s\S]*?)<\/think(?:ing)?>/gi; - let thinking = ''; - let cleanedContent = content; - - let match; - while ((match = thinkPattern.exec(content)) !== null) { - thinking += (thinking ? '\n' : '') + match[1].trim(); - } - - // Remove thinking tags from content - if (thinking) { - cleanedContent = content.replace(thinkPattern, '').trim(); - } - - return { content: cleanedContent, thinking }; - } - - private maybeCleanHallucinatedTraces(content: string, toolCalls: ToolCall[]): string { - if (!this.config.cleanHallucinatedTraces || toolCalls.length === 0) { - return content; - } - - const matches = content.match(this.getHallucinatedTracePattern()) || []; - const cleanedContent = this.cleanHallucinatedTraces(content); - if (cleanedContent !== content) { - if (logger.isLevelEnabled(LogLevel.VERBOSE) && matches.length > 0) { - const joined = matches.join('\n'); - const clipped = joined.length > 2000 - ? `${joined.slice(0, 2000)}\n... [truncated ${joined.length - 2000} chars]` - : joined; - logger.verbose(`[ollama-cloud] Stripped hallucinated traces:\n${clipped}`); - } - logger.warn('Ollama Cloud: cleaned hallucinated tool traces from model output.'); - } - - return cleanedContent; - } - - /** - * Clean hallucinated agent trace patterns from content. - * Some models output fake "[Calling tool]: {json}[Result from tool]: result" traces. - * This should be called AFTER extractToolCalls to clean up the display content. - */ - private cleanHallucinatedTraces(content: string): string { - // Pattern: [Calling tool_name]: {json}[Result from tool_name]: any text until next [ or end - const hallucinatedTracePattern = this.getHallucinatedTracePattern(); - let cleanedContent = content.replace(hallucinatedTracePattern, '').trim(); - - // Clean up multiple newlines - cleanedContent = cleanedContent.replace(/\n{3,}/g, '\n\n').trim(); - - return cleanedContent; - } - - private getHallucinatedTracePattern(): RegExp { - return /\[Calling\s+[a-z_][a-z0-9_]*\]\s*:\s*\{[^}]*\}\s*(?:\[Result from\s+[a-z_][a-z0-9_]*\]\s*:\s*[^\[]*)?/gi; - } - - /** - * Pull a model if it's not already available. - */ - async pullModel(modelName: string): Promise { - const response = await fetch(`${this.baseUrl}/api/pull`, { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - ...(this.apiKey && { 'Authorization': `Bearer ${this.apiKey}` }), - }, - body: JSON.stringify({ - name: modelName, - }), - }); - - if (!response.ok) { - throw new Error(`Failed to pull model ${modelName}: ${response.statusText}`); - } - - // Wait for the pull to complete - await response.json(); - } - - /** - * Check if Ollama is running and accessible. - */ - async healthCheck(): Promise { - try { - const response = await fetch(`${this.baseUrl}/api/tags`, { - headers: this.apiKey ? { 'Authorization': `Bearer ${this.apiKey}` } : undefined, - }); - return response.ok; - } catch { - return false; - } - } - - /** - * Fetch and cache the context window from Ollama's /api/show endpoint. - * Called lazily on first API request. - */ - private async fetchContextWindow(): Promise { - if (this.cachedContextWindow !== null) return; - - const info = await getOllamaModelInfo(this.model, this.baseUrl); - if (info?.contextWindow) { - this.cachedContextWindow = info.contextWindow; - logger.debug(`Ollama ${this.model}: context window = ${this.cachedContextWindow}`); - } - } - - /** - * Ensure context window is fetched (for use before API calls). - */ - async ensureContextWindow(): Promise { - if (this.cachedContextWindow !== null) return; - - // Avoid multiple concurrent fetches - if (!this.contextWindowPromise) { - this.contextWindowPromise = this.fetchContextWindow(); - } - await this.contextWindowPromise; - } - - /** - * Get the context window for the current model. - * Returns cached value from /api/show, or default if not yet fetched. - * Applies model-specific overrides for models with wrong API values. - */ - override getContextWindow(): number { - const rawContextWindow = this.cachedContextWindow ?? 128000; - - // Apply model-specific override for models with incorrect API values - for (const [pattern, correctWindow] of Object.entries(MODEL_CONTEXT_OVERRIDES)) { - if (this.model.includes(pattern)) { - if (rawContextWindow !== correctWindow) { - logger.warn(`Model ${this.model} reports ${rawContextWindow} context window, using override: ${correctWindow}`); - } - return correctWindow; - } - } - - return rawContextWindow; - } -} diff --git a/src/rag/embeddings/index.ts b/src/rag/embeddings/index.ts index d19d4ed..eb21114 100644 --- a/src/rag/embeddings/index.ts +++ b/src/rag/embeddings/index.ts @@ -31,7 +31,6 @@ export function createEmbeddingProviderFromModelDef( return new OpenAIEmbeddingProvider(modelDef.model); case 'ollama': - case 'ollama-cloud': return new OllamaEmbeddingProvider( modelDef.model, modelDef.baseUrl || 'http://localhost:11434' @@ -40,7 +39,7 @@ export function createEmbeddingProviderFromModelDef( default: throw new Error( `Unsupported embedding provider: ${modelDef.provider}. ` + - `Supported providers: openai, ollama, ollama-cloud` + `Supported providers: openai, ollama` ); } } diff --git a/src/symbol-index/service.ts b/src/symbol-index/service.ts index 881fc03..57cbf7a 100644 --- a/src/symbol-index/service.ts +++ b/src/symbol-index/service.ts @@ -24,6 +24,7 @@ import type { DependencyResult, InheritanceResult, } from './types.js'; +import { hasExtendsMetadata } from './types.js'; // Import extractors from model-map import { RegexSymbolExtractor } from '../model-map/symbols/regex-extractor.js'; @@ -1076,8 +1077,8 @@ export class SymbolIndexService { // For ancestors, look at the 'extends' metadata if (direction === 'ancestors' || direction === 'both') { for (const sym of symbols) { - if (sym.metadata && Array.isArray((sym.metadata as any).extends)) { - for (const ext of (sym.metadata as any).extends) { + if (hasExtendsMetadata(sym.metadata)) { + for (const ext of sym.metadata.extends) { // Find the extended class/interface const extSymbols = this.db.findSymbols(ext, { exact: true, limit: 1 }); if (extSymbols.length > 0) { @@ -1098,16 +1099,14 @@ export class SymbolIndexService { if (direction === 'descendants' || direction === 'both') { const allSymbols = this.db.findSymbols('', { limit: 1000 }); // Get all symbols for (const sym of allSymbols) { - if (sym.metadata && Array.isArray((sym.metadata as any).extends)) { - if ((sym.metadata as any).extends.includes(name)) { - results.push({ - name: sym.name, - kind: sym.kind as 'class' | 'interface', - file: sym.file, - line: sym.line, - direction: sym.kind === 'interface' ? 'implemented-by' : 'extended-by', - }); - } + if (hasExtendsMetadata(sym.metadata) && sym.metadata.extends.includes(name)) { + results.push({ + name: sym.name, + kind: sym.kind as 'class' | 'interface', + file: sym.file, + line: sym.line, + direction: sym.kind === 'interface' ? 'implemented-by' : 'extended-by', + }); } } } diff --git a/src/symbol-index/types.ts b/src/symbol-index/types.ts index 5047335..9994ac9 100644 --- a/src/symbol-index/types.ts +++ b/src/symbol-index/types.ts @@ -191,3 +191,25 @@ export interface IndexStats { lastUpdate: string; indexSizeBytes: number; } + +/** + * Symbol metadata with inheritance information. + */ +export interface SymbolMetadataWithExtends { + extends?: string[]; + implements?: string[]; + [key: string]: unknown; +} + +/** + * Type guard to check if symbol metadata has extends information. + */ +export function hasExtendsMetadata( + metadata: Record | undefined +): metadata is SymbolMetadataWithExtends & { extends: string[] } { + return ( + metadata !== undefined && + 'extends' in metadata && + Array.isArray(metadata.extends) + ); +} diff --git a/src/workflow/steps/conditional.ts b/src/workflow/steps/conditional.ts index 26ce886..a56fb78 100644 --- a/src/workflow/steps/conditional.ts +++ b/src/workflow/steps/conditional.ts @@ -1,22 +1,26 @@ // Copyright 2026 Layne Penney // SPDX-License-Identifier: AGPL-3.0-or-later +import type { Agent } from '../../agent.js'; import { - WorkflowStep, WorkflowState, ConditionalStep } from '../types.js'; -import { checkFileExists } from './file-exists.js'; + +/** + * Context type for condition evaluation + */ +type ConditionContext = Record; /** * Evaluate conditional expressions safely */ -export function evaluateCondition(condition: string, context: any): boolean { +export function evaluateCondition(condition: string, context: ConditionContext): boolean { // Remove whitespace and normalize const normalizedCondition = condition.trim().toLowerCase(); - + // Simple condition evaluation - const simpleConditions: Record boolean> = { + const simpleConditions: Record boolean> = { 'true': () => true, 'false': () => false, 'approved': (ctx) => ctx?.approved === true, @@ -88,16 +92,23 @@ export function evaluateCondition(condition: string, context: any): boolean { return !!context[normalizedCondition]; } +interface ConditionalResult { + condition: string; + result: boolean; + nextStep: string | null; + contextUsed: string[]; +} + /** * Execute a conditional step */ export async function executeConditionalStep( step: ConditionalStep, state: WorkflowState, - agent?: any -): Promise { + agent?: Agent +): Promise { // Merge state variables with additional context - const context = { + const context: ConditionContext = { ...state.variables, agentAvailable: !!agent, stepCount: state.history.length, @@ -106,7 +117,7 @@ export async function executeConditionalStep( }; const result = evaluateCondition(step.check, context); - + return { condition: step.check, result, diff --git a/src/workflow/steps/file-exists.ts b/src/workflow/steps/file-exists.ts index 845e188..59935e8 100644 --- a/src/workflow/steps/file-exists.ts +++ b/src/workflow/steps/file-exists.ts @@ -2,8 +2,8 @@ // SPDX-License-Identifier: AGPL-3.0-or-later import { promises as fs } from 'node:fs'; +import type { Agent } from '../../agent.js'; import { - WorkflowStep, WorkflowState, CheckFileExistsStep } from '../types.js'; @@ -20,21 +20,27 @@ export async function checkFileExists(filePath: string): Promise { } } +interface FileExistsResult { + filePath: string; + exists: boolean; + fileExists: boolean; // Alias for condition evaluation +} + /** * Execute a file existence check step */ export async function executeCheckFileExistsStep( step: CheckFileExistsStep, - state: WorkflowState, - agent?: any -): Promise { + _state: WorkflowState, + _agent?: Agent +): Promise { const filePath = step.file || 'test-file.txt'; const exists = await checkFileExists(filePath); - + return { filePath, exists, - fileExists: exists // Alias for condition evaluation + fileExists: exists }; } diff --git a/src/workflow/steps/index.ts b/src/workflow/steps/index.ts index 3451851..6c9a64d 100644 --- a/src/workflow/steps/index.ts +++ b/src/workflow/steps/index.ts @@ -1,62 +1,104 @@ // Copyright 2026 Layne Penney // SPDX-License-Identifier: AGPL-3.0-or-later -import type { WorkflowStep, WorkflowState, ConditionalStep, CheckFileExistsStep, LoopStep, InteractiveStep } from '../types.js'; +import type { Agent } from '../../agent.js'; +import type { BaseProvider } from '../../providers/base.js'; +import type { + WorkflowStep, + WorkflowState, + LoopStep, + InteractiveStep, + ShellActionStep, + AiPromptActionStep, + GitActionStep, + PrActionStep, +} from '../types.js'; +import { + isConditionalStep, + isCheckFileExistsStep, + isLoopStep, + isInteractiveStep, + isShellStep, + isAiPromptStep, + isPrActionStep, + isGitActionStep, +} from '../types.js'; import { executeSwitchModelStep, validateSwitchModelStep } from './switch-model.js'; import { executeConditionalStep, validateConditionalStep } from './conditional.js'; import { executeCheckFileExistsStep, validateCheckFileExistsStep } from './file-exists.js'; import { executeLoopStep, validateLoopStep } from './loop.js'; import { executeInteractiveStep, validateInteractiveStep } from './interactive.js'; - import { executeShellActionStep, validateShellActionStep } from './shell.js'; import { executeAiPromptActionStep, validateAiPromptActionStep } from './ai-prompt.js'; import { executeGitActionStep, validateGitActionStep } from './git.js'; import { executePrActionStep, validatePrActionStep } from './pr.js'; -// Type imports for proper casting -import type { ShellActionStep, AiPromptActionStep, GitActionStep, PrActionStep } from '../types.js'; /** * Execute any workflow step */ export async function executeStep( step: WorkflowStep, state: WorkflowState, - agent: any, - availableModels: Map -): Promise { + agent: Agent, + availableModels: Map +): Promise { switch (step.action) { case 'switch-model': return executeSwitchModelStep(step, state, agent, availableModels); - + case 'conditional': - return executeConditionalStep(step as ConditionalStep, state, agent); - + if (isConditionalStep(step)) { + return executeConditionalStep(step, state, agent); + } + throw new Error(`Invalid conditional step: ${step.id}`); + case 'check-file-exists': - return executeCheckFileExistsStep(step as CheckFileExistsStep, state, agent); - + if (isCheckFileExistsStep(step)) { + return executeCheckFileExistsStep(step, state, agent); + } + throw new Error(`Invalid check-file-exists step: ${step.id}`); + case 'loop': - return executeLoopStep(step as LoopStep, state, agent); - + if (isLoopStep(step)) { + return executeLoopStep(step, state, agent); + } + throw new Error(`Invalid loop step: ${step.id}`); + case 'interactive': - return executeInteractiveStep(step as InteractiveStep, state, agent); - + if (isInteractiveStep(step)) { + return executeInteractiveStep(step, state, agent); + } + throw new Error(`Invalid interactive step: ${step.id}`); + case 'shell': - return executeShellActionStep(step as ShellActionStep, state, agent); - + if (isShellStep(step)) { + return executeShellActionStep(step, state, agent); + } + throw new Error(`Invalid shell step: ${step.id}`); + case 'ai-prompt': - return executeAiPromptActionStep(step as AiPromptActionStep, state, agent); - + if (isAiPromptStep(step)) { + return executeAiPromptActionStep(step, state, agent); + } + throw new Error(`Invalid ai-prompt step: ${step.id}`); + case 'create-pr': case 'review-pr': case 'merge-pr': - return executePrActionStep(step as PrActionStep, state, agent); - + if (isPrActionStep(step)) { + return executePrActionStep(step, state, agent); + } + throw new Error(`Invalid PR action step: ${step.id}`); + case 'commit': case 'push': case 'pull': case 'sync': - return executeGitActionStep(step as GitActionStep, state, agent); - + if (isGitActionStep(step)) { + return executeGitActionStep(step, state, agent); + } + throw new Error(`Invalid git action step: ${step.id}`); + default: throw new Error(`Unknown action: ${step.action}`); } @@ -66,56 +108,74 @@ export async function executeStep( * Validate a workflow step */ export function validateStep(step: WorkflowStep): void { + // Basic validation for all steps + if (!step.id || typeof step.id !== 'string') { + throw new Error('Step must have an id'); + } + if (!step.action || typeof step.action !== 'string') { + throw new Error('Step must have an action'); + } + switch (step.action) { case 'switch-model': validateSwitchModelStep(step); break; - + case 'conditional': - validateConditionalStep(step as ConditionalStep); + if (isConditionalStep(step)) { + validateConditionalStep(step); + } break; - + case 'check-file-exists': - validateCheckFileExistsStep(step as CheckFileExistsStep); + if (isCheckFileExistsStep(step)) { + validateCheckFileExistsStep(step); + } break; - + case 'loop': - validateLoopStep(step as LoopStep); + if (isLoopStep(step)) { + validateLoopStep(step); + } + break; + + case 'interactive': + if (isInteractiveStep(step)) { + validateInteractiveStep(step); + } break; - + case 'shell': - validateShellActionStep(step as ShellActionStep); + if (isShellStep(step)) { + validateShellActionStep(step); + } break; - + case 'ai-prompt': - validateAiPromptActionStep(step as AiPromptActionStep); + if (isAiPromptStep(step)) { + validateAiPromptActionStep(step); + } break; - + case 'create-pr': case 'review-pr': case 'merge-pr': - validatePrActionStep(step as PrActionStep); + if (isPrActionStep(step)) { + validatePrActionStep(step); + } break; - + case 'commit': case 'push': case 'pull': case 'sync': - validateGitActionStep(step as GitActionStep); - break; - - case 'interactive': - validateInteractiveStep(step as InteractiveStep); + if (isGitActionStep(step)) { + validateGitActionStep(step); + } break; - - // Add validation for other step types as needed + default: - // Basic validation for all steps - if (!step.id || typeof step.id !== 'string') { - throw new Error('Step must have an id'); - } - if (!step.action || typeof step.action !== 'string') { - throw new Error('Step must have an action'); - } - } + // Unknown action types pass through with basic validation only + break; } +} diff --git a/src/workflow/steps/switch-model.ts b/src/workflow/steps/switch-model.ts index ee80b19..70b6096 100644 --- a/src/workflow/steps/switch-model.ts +++ b/src/workflow/steps/switch-model.ts @@ -1,20 +1,41 @@ // Copyright 2026 Layne Penney // SPDX-License-Identifier: AGPL-3.0-or-later -import { WorkflowStep, WorkflowState, WorkflowError } from '../types.js'; +import type { Agent } from '../../agent.js'; +import { WorkflowStep, WorkflowState, WorkflowError, SwitchModelStep, isSwitchModelStep } from '../types.js'; import { createProvider, type BaseProvider } from '../../providers/index.js'; +interface SwitchModelResult { + success: boolean; + previousProvider: { + name: string; + model: string; + }; + newProvider: { + name: string; + model: string; + }; + contextPreserved: boolean; +} + /** * Executes switch-model steps */ export async function executeSwitchModelStep( step: WorkflowStep, - state: WorkflowState, - agent: any, + _state: WorkflowState, + agent: Agent, availableModels: Map -): Promise { - const targetModel = (step as any).model; - +): Promise { + if (!isSwitchModelStep(step)) { + throw new WorkflowError( + `Step ${step.id} is not a valid switch-model step`, + step.id + ); + } + + const targetModel = step.model; + if (!targetModel || typeof targetModel !== 'string') { throw new WorkflowError( `Switch-model step ${step.id} must specify a model`, @@ -34,7 +55,7 @@ export async function executeSwitchModelStep( : ['', targetModel]; // Default to current provider if no provider specified - const effectiveProvider = providerName || agent.provider.getName(); + const effectiveProvider = providerName || agent.getProvider().getName(); const effectiveModel = modelName || targetModel; provider = createProvider({ @@ -53,7 +74,7 @@ export async function executeSwitchModelStep( } // Save current provider context before switching - const previousProvider = agent.provider; + const previousProvider = agent.getProvider(); const previousModel = previousProvider.getModel(); // Switch to the new provider @@ -77,7 +98,14 @@ export async function executeSwitchModelStep( * Validates that a switch-model step has required properties */ export function validateSwitchModelStep(step: WorkflowStep): void { - if (!(step as any).model || typeof (step as any).model !== 'string') { + if (!isSwitchModelStep(step)) { + throw new WorkflowError( + `Step ${step.id} is not a valid switch-model step`, + step.id + ); + } + + if (!step.model || typeof step.model !== 'string') { throw new WorkflowError( `Switch-model step ${step.id} must specify a model`, step.id diff --git a/src/workflow/types.ts b/src/workflow/types.ts index 5cdd2b5..b7d9c89 100644 --- a/src/workflow/types.ts +++ b/src/workflow/types.ts @@ -11,22 +11,47 @@ export interface Workflow { version?: string; interactive?: boolean; persistent?: boolean; - variables?: Record; + variables?: Record; steps: WorkflowStep[]; } -export interface WorkflowStep { +/** + * Base properties common to all workflow steps. + */ +export interface BaseWorkflowStep { id: string; - action: string; description?: string; - // Step-specific configuration - [key: string]: any; +} + +/** + * Discriminated union of all workflow step types. + * Use type guards (isShellStep, isSwitchModelStep, etc.) to narrow the type. + */ +export type WorkflowStep = + | ShellActionStep + | SwitchModelStep + | ConditionalStep + | LoopStep + | InteractiveStep + | CheckFileExistsStep + | AiPromptActionStep + | PrActionStep + | GitActionStep + | GenericStep; + +/** + * Generic step for unknown/extensible actions. + * Used as a fallback when action type is not recognized. + */ +export interface GenericStep extends BaseWorkflowStep { + action: string; + [key: string]: unknown; } export interface WorkflowState { name: string; currentStep: string; - variables: Record; + variables: Record; history: StepExecution[]; iterationCount: number; paused: boolean; @@ -38,31 +63,31 @@ export interface WorkflowState { export interface StepExecution { step: string; status: 'pending' | 'running' | 'completed' | 'failed'; - result?: any; + result?: unknown; timestamp: string; } -// Step-specific types -export interface SwitchModelStep extends WorkflowStep { +// Step-specific types - each extends BaseWorkflowStep and has a literal action +export interface SwitchModelStep extends BaseWorkflowStep { action: 'switch-model'; model: string; } -export interface ConditionalStep extends WorkflowStep { +export interface ConditionalStep extends BaseWorkflowStep { action: 'conditional'; check: string; onTrue: string; onFalse?: string; } -export interface LoopStep extends WorkflowStep { +export interface LoopStep extends BaseWorkflowStep { action: 'loop'; to: string; condition: string; maxIterations?: number; } -export interface InteractiveStep extends WorkflowStep { +export interface InteractiveStep extends BaseWorkflowStep { action: 'interactive'; prompt: string; inputType?: 'text' | 'password' | 'confirm' | 'choice' | 'multiline'; @@ -72,32 +97,70 @@ export interface InteractiveStep extends WorkflowStep { choices?: string[]; } -// Action types -export interface CheckFileExistsStep extends WorkflowStep { +export interface CheckFileExistsStep extends BaseWorkflowStep { action: 'check-file-exists'; file?: string; } -export interface ShellActionStep extends WorkflowStep { +export interface ShellActionStep extends BaseWorkflowStep { action: 'shell'; command: string; } -export interface AiPromptActionStep extends WorkflowStep { + +export interface AiPromptActionStep extends BaseWorkflowStep { action: 'ai-prompt'; prompt: string; model?: string; } -export interface PrActionStep extends WorkflowStep { +export interface PrActionStep extends BaseWorkflowStep { action: 'create-pr' | 'review-pr' | 'merge-pr'; title?: string; body?: string; base?: string; } -export interface GitActionStep extends WorkflowStep { +export interface GitActionStep extends BaseWorkflowStep { action: 'commit' | 'push' | 'pull' | 'sync'; message?: string; + base?: string; +} + +// Type guards for step types +export function isShellStep(step: WorkflowStep): step is ShellActionStep { + return step.action === 'shell'; +} + +export function isSwitchModelStep(step: WorkflowStep): step is SwitchModelStep { + return step.action === 'switch-model'; +} + +export function isConditionalStep(step: WorkflowStep): step is ConditionalStep { + return step.action === 'conditional'; +} + +export function isLoopStep(step: WorkflowStep): step is LoopStep { + return step.action === 'loop'; +} + +export function isInteractiveStep(step: WorkflowStep): step is InteractiveStep { + return step.action === 'interactive'; +} + +export function isCheckFileExistsStep(step: WorkflowStep): step is CheckFileExistsStep { + return step.action === 'check-file-exists'; +} + +export function isAiPromptStep(step: WorkflowStep): step is AiPromptActionStep { + return step.action === 'ai-prompt'; +} + +export function isPrActionStep(step: WorkflowStep): step is PrActionStep { + return step.action === 'create-pr' || step.action === 'review-pr' || step.action === 'merge-pr'; +} + +export function isGitActionStep(step: WorkflowStep): step is GitActionStep { + return step.action === 'commit' || step.action === 'push' || step.action === 'pull' || step.action === 'sync'; } // Error types diff --git a/tests/commands.e2e.test.ts b/tests/commands.e2e.test.ts index 0e2ab24..d63b7f5 100644 --- a/tests/commands.e2e.test.ts +++ b/tests/commands.e2e.test.ts @@ -365,14 +365,15 @@ describe('/compact command E2E', () => { beforeEach(() => { projectDir = createTempProjectDir(); - // Need multiple responses for the conversation before compact - // Note: After first exchange, auto-label generation makes an API call, - // so we need an extra response for that + // Need multiple responses for the conversation before compact. + // Auto-label generation and RAG/context checks may consume responses, + // so we use extra buffer responses with generic patterns for robustness. mockSession = setupMockE2E([ - textResponse('First message response.'), - textResponse('Auto label'), // For auto-label generation after first exchange - textResponse('Second message response.'), - textResponse('Third message response.'), + textResponse('Response A from mock.'), + textResponse('Response B from mock.'), + textResponse('Response C from mock.'), + textResponse('Response D from mock.'), + textResponse('Response E from mock.'), textResponse('Summary of conversation.'), // For the compact summarization ], { enableLogging: true }); }); @@ -395,14 +396,19 @@ describe('/compact command E2E', () => { await proc.waitFor(/Tips:|You:/i); // Build up some conversation history + // We use generic patterns to match any mock response (A, B, C, etc.) + // Some responses may be consumed by RAG/context checks, so we use buffer proc.writeLine('First message'); - await proc.waitFor(/First message response/i); + await proc.waitFor(/Response [A-E] from mock/i); + proc.clearOutput(); // Clear to avoid matching previous response proc.writeLine('Second message'); - await proc.waitFor(/Second message response/i); + await proc.waitFor(/Response [A-E] from mock/i); + proc.clearOutput(); proc.writeLine('Third message'); - await proc.waitFor(/Third message response/i); + await proc.waitFor(/Response [A-E] from mock/i); + proc.clearOutput(); // Compact the conversation - wait for compaction message proc.writeLine('/compact'); diff --git a/tests/helpers/process-harness.ts b/tests/helpers/process-harness.ts index 2e48e26..4667844 100644 --- a/tests/helpers/process-harness.ts +++ b/tests/helpers/process-harness.ts @@ -107,6 +107,14 @@ export class ProcessHarness { throw new Error(`Timeout waiting for pattern: ${pattern}\n\nOutput:\n${this.output}`); } + /** + * Wait for output buffer to flush. Use between sequential operations + * to prevent race conditions where responses arrive before being captured. + */ + async waitForOutputFlush(ms = 100): Promise { + await new Promise(r => setTimeout(r, ms)); + } + kill(): void { this.proc.kill('SIGTERM'); } diff --git a/tests/providers.test.ts b/tests/providers.test.ts index 4d26862..32d58ab 100644 --- a/tests/providers.test.ts +++ b/tests/providers.test.ts @@ -490,8 +490,8 @@ Step 3: Choose solution }); }); -describe('OllamaCloudProvider function-call style parsing', () => { - // Test the argument parsing function directly (mirrors implementation in ollama-cloud.ts) +describe('Function-call style parsing (old OllamaCloudProvider implementation)', () => { + // Test the argument parsing function directly (mirrors implementation that was in ollama-cloud.ts) function parseFunctionCallArgs(argsString: string): Record { const args: Record = {}; if (!argsString.trim()) return args; @@ -666,7 +666,7 @@ describe('OllamaCloudProvider function-call style parsing', () => { // Test that tool extraction works on thinking content when regular content is empty it('extracts tool call from thinking content when content is empty', () => { - // Simulate the logic in ollama-cloud.ts where toolExtractionText falls back to thinking + // Simulate the logic that was in ollama-cloud.ts where toolExtractionText falls back to thinking const thinkingCleanedContent = ''; // empty regular content const combinedThinking = 'The test is failing. Let me check the file:\n\n[read_file(path="test.ts")]'; @@ -681,7 +681,7 @@ describe('OllamaCloudProvider function-call style parsing', () => { }); describe('OllamaCloudProvider tool name normalization', () => { - // Test the normalization function directly (mirrors implementation in ollama-cloud.ts) + // Test the normalization function directly (mirrors implementation that was in ollama-cloud.ts) function normalizeToolName(name: string): string { const prefixes = [ 'repo_browser.', diff --git a/tests/rag-embeddings.test.ts b/tests/rag-embeddings.test.ts index 823fa93..df0c317 100644 --- a/tests/rag-embeddings.test.ts +++ b/tests/rag-embeddings.test.ts @@ -561,9 +561,9 @@ describe('Embedding Providers', () => { expect(provider.getModel()).toBe('nomic-embed-text'); }); - it('creates Ollama-cloud provider', () => { + it('creates Ollama provider with cloud URL', () => { const modelDef = { - provider: 'ollama-cloud' as const, + provider: 'ollama' as const, model: 'glm-4.7:cloud', baseUrl: 'https://api.ollama.ai', }; diff --git a/tests/workflow-steps.test.ts b/tests/workflow-steps.test.ts index b5d563c..40ff4da 100644 --- a/tests/workflow-steps.test.ts +++ b/tests/workflow-steps.test.ts @@ -7,13 +7,17 @@ import { WorkflowState } from '../src/workflow/types.js'; // Mock Agent class class MockAgent { - provider = { + private _provider = { getName: () => 'ollama', getModel: () => 'llama3.2' }; - - setProvider(newProvider: any) { - this.provider = { + + getProvider() { + return this._provider; + } + + setProvider(newProvider: { type: string; model: string; getName: () => string; getModel: () => string }) { + this._provider = { getName: () => newProvider.type, getModel: () => newProvider.model }; diff --git a/workflows/ai-generated-1769396002530-workflow.yaml b/workflows/ai-generated-1769396002530-workflow.yaml deleted file mode 100644 index 1d6caf0..0000000 --- a/workflows/ai-generated-1769396002530-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769396002530 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769396036338-workflow.yaml b/workflows/ai-generated-1769396036338-workflow.yaml deleted file mode 100644 index c79564b..0000000 --- a/workflows/ai-generated-1769396036338-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769396036338 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769396109004-workflow.yaml b/workflows/ai-generated-1769396109004-workflow.yaml deleted file mode 100644 index ed8c6eb..0000000 --- a/workflows/ai-generated-1769396109004-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769396109004 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769396600346-workflow.yaml b/workflows/ai-generated-1769396600346-workflow.yaml deleted file mode 100644 index 02d8fe1..0000000 --- a/workflows/ai-generated-1769396600346-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769396600346 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769396626430-workflow.yaml b/workflows/ai-generated-1769396626430-workflow.yaml deleted file mode 100644 index a5890bd..0000000 --- a/workflows/ai-generated-1769396626430-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769396626430 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769398892700-workflow.yaml b/workflows/ai-generated-1769398892700-workflow.yaml deleted file mode 100644 index a90d8a7..0000000 --- a/workflows/ai-generated-1769398892700-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769398892700 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769399335666-workflow.yaml b/workflows/ai-generated-1769399335666-workflow.yaml deleted file mode 100644 index 12c669b..0000000 --- a/workflows/ai-generated-1769399335666-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769399335666 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769399335669-workflow.yaml b/workflows/ai-generated-1769399335669-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769399335669-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769399983672-workflow.yaml b/workflows/ai-generated-1769399983672-workflow.yaml deleted file mode 100644 index e7554fd..0000000 --- a/workflows/ai-generated-1769399983672-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769399983672 -description: "Generated from: create a deployment workflow with AI assistance" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a deployment workflow with AI assistance" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769400017340-workflow.yaml b/workflows/ai-generated-1769400017340-workflow.yaml deleted file mode 100644 index 8488127..0000000 --- a/workflows/ai-generated-1769400017340-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769400017340 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769400017345-workflow.yaml b/workflows/ai-generated-1769400017345-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769400017345-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769400756842-workflow.yaml b/workflows/ai-generated-1769400756842-workflow.yaml deleted file mode 100644 index 8781aa5..0000000 --- a/workflows/ai-generated-1769400756842-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769400756842 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769400756849-workflow.yaml b/workflows/ai-generated-1769400756849-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769400756849-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769400925128-workflow.yaml b/workflows/ai-generated-1769400925128-workflow.yaml deleted file mode 100644 index 2c64289..0000000 --- a/workflows/ai-generated-1769400925128-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769400925128 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769400925132-workflow.yaml b/workflows/ai-generated-1769400925132-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769400925132-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769401560678-workflow.yaml b/workflows/ai-generated-1769401560678-workflow.yaml deleted file mode 100644 index 724786a..0000000 --- a/workflows/ai-generated-1769401560678-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769401560678 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769401560682-workflow.yaml b/workflows/ai-generated-1769401560682-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769401560682-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769401583657-workflow.yaml b/workflows/ai-generated-1769401583657-workflow.yaml deleted file mode 100644 index 1f727a2..0000000 --- a/workflows/ai-generated-1769401583657-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769401583657 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769401583664-workflow.yaml b/workflows/ai-generated-1769401583664-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769401583664-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769402910583-workflow.yaml b/workflows/ai-generated-1769402910583-workflow.yaml deleted file mode 100644 index ed634c7..0000000 --- a/workflows/ai-generated-1769402910583-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769402910583 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769402910587-workflow.yaml b/workflows/ai-generated-1769402910587-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769402910587-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769403680807-workflow.yaml b/workflows/ai-generated-1769403680807-workflow.yaml deleted file mode 100644 index 5836a47..0000000 --- a/workflows/ai-generated-1769403680807-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769403680807 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769403680812-workflow.yaml b/workflows/ai-generated-1769403680812-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769403680812-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769403982277-workflow.yaml b/workflows/ai-generated-1769403982277-workflow.yaml deleted file mode 100644 index af8bc0e..0000000 --- a/workflows/ai-generated-1769403982277-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769403982277 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769403982281-workflow.yaml b/workflows/ai-generated-1769403982281-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769403982281-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769404619451-workflow.yaml b/workflows/ai-generated-1769404619451-workflow.yaml deleted file mode 100644 index 84ed9eb..0000000 --- a/workflows/ai-generated-1769404619451-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769404619451 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769404619454-workflow.yaml b/workflows/ai-generated-1769404619454-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769404619454-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-1769405466912-workflow.yaml b/workflows/ai-generated-1769405466912-workflow.yaml deleted file mode 100644 index 45175c7..0000000 --- a/workflows/ai-generated-1769405466912-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: workflow-1769405466912 -description: "Generated from: create a testing workflow" - -steps: - - id: shell-welcome - action: shell - description: "Welcome message" - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: "Analyze the task" - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: "Completion message" - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/ai-generated-1769405466915-workflow.yaml b/workflows/ai-generated-1769405466915-workflow.yaml deleted file mode 100644 index 7e49fbf..0000000 --- a/workflows/ai-generated-1769405466915-workflow.yaml +++ /dev/null @@ -1,8 +0,0 @@ -name: testing-workflow -description: "Generated workflow for testing" - -steps: - - id: test-step - action: shell - description: "Test step" - command: "echo test" diff --git a/workflows/ai-generated-workflow.yaml b/workflows/ai-generated-workflow.yaml deleted file mode 100644 index 7f0cc0c..0000000 --- a/workflows/ai-generated-workflow.yaml +++ /dev/null @@ -1,16 +0,0 @@ -name: ai-generated-workflow -description: Generated from: create a testing workflow - -steps: - - id: shell-welcome - action: shell - description: Welcome message - command: "echo \"Starting AI-generated workflow\"" - - id: prompt-analyze - action: ai-prompt - description: Analyze the task - prompt: "Please analyze and help me with: create a testing workflow" - - id: shell-complete - action: shell - description: Completion message - command: "echo \"Workflow completed successfully\"" diff --git a/workflows/generated-deployment-1769396002531.yaml b/workflows/generated-deployment-1769396002531.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769396002531.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769396002534.yaml b/workflows/generated-deployment-1769396002534.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769396002534.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769396036339.yaml b/workflows/generated-deployment-1769396036339.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769396036339.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769396109005.yaml b/workflows/generated-deployment-1769396109005.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769396109005.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769396600347.yaml b/workflows/generated-deployment-1769396600347.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769396600347.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769396626431.yaml b/workflows/generated-deployment-1769396626431.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769396626431.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769396626432.yaml b/workflows/generated-deployment-1769396626432.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769396626432.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769398892699.yaml b/workflows/generated-deployment-1769398892699.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769398892699.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769399335668.yaml b/workflows/generated-deployment-1769399335668.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769399335668.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769399335671.yaml b/workflows/generated-deployment-1769399335671.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769399335671.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769400017344.yaml b/workflows/generated-deployment-1769400017344.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769400017344.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769400017347.yaml b/workflows/generated-deployment-1769400017347.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769400017347.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769400756848.yaml b/workflows/generated-deployment-1769400756848.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769400756848.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769400756853.yaml b/workflows/generated-deployment-1769400756853.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769400756853.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769400925131.yaml b/workflows/generated-deployment-1769400925131.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769400925131.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769400925134.yaml b/workflows/generated-deployment-1769400925134.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769400925134.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769400992235.yaml b/workflows/generated-deployment-1769400992235.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769400992235.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769401560681.yaml b/workflows/generated-deployment-1769401560681.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769401560681.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769401560685.yaml b/workflows/generated-deployment-1769401560685.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769401560685.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769401583663.yaml b/workflows/generated-deployment-1769401583663.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769401583663.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769401583669.yaml b/workflows/generated-deployment-1769401583669.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769401583669.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769402910587.yaml b/workflows/generated-deployment-1769402910587.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769402910587.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769402910590.yaml b/workflows/generated-deployment-1769402910590.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769402910590.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769403680811.yaml b/workflows/generated-deployment-1769403680811.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769403680811.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769403680816.yaml b/workflows/generated-deployment-1769403680816.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769403680816.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769403982281.yaml b/workflows/generated-deployment-1769403982281.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769403982281.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769403982284.yaml b/workflows/generated-deployment-1769403982284.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769403982284.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769404619453.yaml b/workflows/generated-deployment-1769404619453.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769404619453.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769404619457.yaml b/workflows/generated-deployment-1769404619457.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769404619457.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769405466915.yaml b/workflows/generated-deployment-1769405466915.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769405466915.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-1769405466918.yaml b/workflows/generated-deployment-1769405466918.yaml deleted file mode 100644 index 4486e15..0000000 --- a/workflows/generated-deployment-1769405466918.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: "Automated Git deployment workflow" - -steps: - - id: pull-changes - action: shell - description: "Pull latest changes" - command: "git pull origin main" - - id: run-tests - action: shell - description: "Run test suite" - command: "pnpm test" - - id: build-project - action: shell - description: "Build the project" - command: "pnpm build" - - id: deploy-step - action: shell - description: "Deploy the project" - command: "echo \"Deploying...\"" diff --git a/workflows/generated-deployment-workflow.yaml b/workflows/generated-deployment-workflow.yaml deleted file mode 100644 index 726467f..0000000 --- a/workflows/generated-deployment-workflow.yaml +++ /dev/null @@ -1,20 +0,0 @@ -name: git-deployment -description: Automated Git deployment workflow - -steps: - - id: pull-changes - action: shell - description: Pull latest changes - command: "git pull origin main" - - id: run-tests - action: shell - description: Run test suite - command: "pnpm test" - - id: build-project - action: shell - description: Build the project - command: "pnpm build" - - id: deploy-step - action: shell - description: Deploy the project - command: "echo \"Deploying...\""