⚠️ BETA SOFTWARE WARNING
Conduit is currently in beta development. The MCP server core functionality is stable and ready for general use with MCP clients (VS Code Copilot, Cline, Claude Desktop). However, LLM integration, Agent framework, and Swarm features are experimental and still undergoing testing. Use these advanced features with caution in production environments. Please report issues and provide feedback to help improve the project.
Conduit is a comprehensive AI framework and universal MCP platform that bridges traditional MCP tools with advanced AI agent systems. Built in Go, it provides everything from simple tool execution to sophisticated multi-agent swarms with multi-LLM coordination. Whether you need basic MCP client integration or complex autonomous AI workflows, Conduit scales from simple library usage to enterprise-grade agent orchestration with support for local models (Ollama) and cloud providers (OpenAI, DeepInfra).
Conduit implements the standard MCP (Model Context Protocol) specification and works with any MCP-compatible client.
- VS Code Copilot - Full integration with all 31 tools
- Cline - Complete tool discovery and functionality
- Claude Desktop - Standard MCP stdio support
Since Conduit follows the MCP specification, it should work with:
- Anthropic Claude Desktop
- Continue.dev
- Cursor IDE
- Any custom MCP client implementation
Tested and confirmed with:
- VS Code Copilot
- VS Code Cline
- Warp CLI
All clients will have access to Conduit's complete toolkit of 31 tools for enhanced AI assistance.
Conduit's tool calling system has been thoroughly tested and verified with real LLM implementations:
- Model: llama3.2 (and compatible models)
- Tool Selection: ✅ LLM automatically chooses correct tools
- Parameter Extraction: ✅ LLM correctly extracts parameters from natural language
- Tool Execution: ✅ All 31 tools execute successfully
- Result Processing: ✅ LLM processes tool results and generates natural responses
- Error Handling: ✅ Proper error handling and user feedback

Note: Ollama integration was built for MCP testing, so expect breaking changes; the main focus is supplying MCP tools to LLM clients.
```
# ✅ Text manipulation - VERIFIED WORKING
User: "convert hello world to uppercase"
LLM: Automatically selects `uppercase` tool → Returns "HELLO WORLD"

# ✅ UUID generation - VERIFIED WORKING
User: "generate a UUID for me"
LLM: Automatically selects `uuid` tool → Returns generated UUID

# ✅ Base64 encoding - VERIFIED WORKING
User: "encode Mountain123 in base64"
LLM: Automatically selects `base64_encode` tool → Returns "TW91bnRhaW4xMjM="

# ✅ Memory operations - VERIFIED WORKING
User: "remember that my favorite color is blue"
LLM: Automatically selects `remember` tool → Stores information
```

Result: 100% success rate with automatic tool selection and execution.
```bash
go get github.com/benozo/conduit
```

```go
// main.go
package main

import (
    "log"

    conduit "github.com/benozo/conduit/lib"
    "github.com/benozo/conduit/lib/tools"
    "github.com/benozo/conduit/mcp"
)

func main() {
    config := conduit.DefaultConfig()
    config.Mode = mcp.ModeStdio

    server := conduit.NewServer(config)

    tools.RegisterTextTools(server)
    tools.RegisterMemoryTools(server)
    tools.RegisterUtilityTools(server)

    log.Fatal(server.Start())
}
```

```bash
go build -o my-mcp-server .
```

Add to your MCP client configuration:
```json
{
  "command": "/path/to/my-mcp-server",
  "args": ["--stdio"]
}
```

```bash
# Start Ollama
ollama serve
ollama pull llama3.2

# Start your tool-enabled server
./my-mcp-server --http

# Test natural language tool calling
curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "convert hello world to uppercase"}'
```

- Universal MCP Compatibility: Works with any MCP client (VS Code Copilot, Cline, Claude Desktop, and more)
- AI Agent Framework: Complete autonomous agent system with task planning, execution, and monitoring
- Multi-Agent Swarms: OpenAI Swarm-inspired coordination with intelligent task routing and handoffs
- Multi-LLM Architecture: Each agent can use specialized LLM providers (Ollama, OpenAI, DeepInfra, etc.)
- Intelligent Tool Calling: LLMs automatically choose and execute tools based on natural language requests
- Advanced Workflows: DAG, Supervisor, Pipeline, and Conditional workflow orchestration patterns
- ReAct Reasoning: Built-in reasoning and action capabilities with transparent decision making
- Local AI Support: Privacy-focused local models via Ollama integration
- Cloud AI Integration: Production-ready OpenAI, DeepInfra, and custom API support
- Dual Protocol Support: stdio (for MCP clients) and HTTP/SSE (for web applications)
- Embeddable Design: Use as standalone server, Go library, or embedded in existing applications
- Enhanced Tool Registration: Rich schema support with type validation and comprehensive documentation
- Memory Management: Persistent memory system for context, conversation history, and agent coordination
- Production Ready: Enterprise-grade error handling, logging, monitoring, and configuration
- Comprehensive Tool Suite: 31+ battle-tested tools for text, memory, and utility operations
Conduit includes a powerful AI Agents framework that provides high-level abstractions for creating autonomous agents that can execute complex tasks using your MCP tools.
- Agent Management: Create, configure, and manage multiple specialized AI agents
- Task Execution: Assign complex tasks to agents with automatic planning and execution
- Specialized Agents: Pre-built agents for math, text processing, memory management, and utilities
- Custom Agents: Create custom agents with specific tool sets and behaviors
- Task Monitoring: Real-time task progress tracking and execution monitoring
- Memory Integration: Per-agent memory and context management
```go
package main

import (
    "github.com/benozo/conduit/agents"
    conduit "github.com/benozo/conduit/lib"
    "github.com/benozo/conduit/lib/tools"
)

func main() {
    // Create MCP server
    config := conduit.DefaultConfig()
    server := conduit.NewEnhancedServer(config)

    tools.RegisterTextTools(server)
    tools.RegisterMemoryTools(server)
    tools.RegisterUtilityTools(server)

    // Create agent manager
    agentManager := agents.NewMCPAgentManager(server)

    // Create specialized agents
    agentManager.CreateSpecializedAgents()

    // Create and execute a math task
    task, _ := agentManager.CreateTaskForAgent("math_agent", agents.TaskTypeMath, map[string]interface{}{
        "a":         25.0,
        "b":         15.0,
        "operation": "multiply",
    })

    // Start server and execute task
    go server.Start()
    agentManager.ExecuteTask(task.ID)

    // Task automatically plans and executes: 25 × 15 = 375
}
```

Conduit now supports a multi-agent, multi-LLM architecture, allowing each agent in a swarm to use its own specialized LLM provider and model. This enables optimal model selection for specific tasks while maintaining full backward compatibility.
- Per-Agent LLM Configuration: Each agent can use a different LLM provider (Ollama, OpenAI, DeepInfra, etc.)
- Task-Specific Model Selection: Match models to their strengths (code generation, content creation, analysis)
- Cost Optimization: Use local models (Ollama) for routing and cloud models (GPT-4) for complex reasoning
- Provider Redundancy: Fallback mechanisms and provider diversity for reliability
- Full Backward Compatibility: Existing swarm code continues to work unchanged
```go
// Create agents with different LLM providers
coordinator := swarmClient.CreateAgentWithModel("coordinator",
    "Route tasks efficiently", []string{},
    &conduit.ModelConfig{
        Provider: "ollama", Model: "llama3.2",
        URL: "http://localhost:11434",
    })

dataAnalyst := swarmClient.CreateAgentWithModel("analyst",
    "Perform complex analysis", []string{"word_count"},
    &conduit.ModelConfig{
        Provider: "openai", Model: "gpt-4",
        APIKey: os.Getenv("OPENAI_API_KEY"),
    })

codeGenerator := swarmClient.CreateAgentWithModel("coder",
    "Generate optimized code", []string{"json_format"},
    &conduit.ModelConfig{
        Provider: "deepinfra", Model: "Qwen/Qwen2.5-Coder-32B-Instruct",
        APIKey: os.Getenv("DEEPINFRA_API_KEY"),
    })
```

- Ollama: Local models (llama3.2, qwen2.5, codellama) - Fast, private, cost-effective
- OpenAI: GPT-4, GPT-3.5-turbo - Premium reasoning and analysis
- DeepInfra: Qwen Coder, Llama models - Specialized code generation and processing

See `examples/multi_llm_swarm/` for a complete working example.
- Math Agent: Specialized in mathematical calculations (`add`, `multiply`)
- Text Agent: Text processing and analysis (`word_count`, `uppercase`, `lowercase`)
- Memory Agent: Data storage and retrieval (`remember`, `recall`, `forget`)
- Utility Agent: Encoding, hashing, generation (`base64_encode`, `uuid`, `timestamp`)
- General Agent: Multi-purpose agent with mixed tool capabilities

See `agents/README.md` for complete documentation and examples.
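Driving another specialized agent looks the same as the math example above. The sketch below assumes a `text_agent` ID and an `agents.TaskTypeText` task type by analogy — these names are illustrative, so check `agents/README.md` for the actual identifiers:

```go
// Illustrative sketch: "text_agent" and agents.TaskTypeText are assumed names,
// mirroring the math example above; verify against agents/README.md.
task, err := agentManager.CreateTaskForAgent("text_agent", agents.TaskTypeText, map[string]interface{}{
    "text":      "Agent Frameworks",
    "operation": "snake_case",
})
if err != nil {
    log.Fatal(err)
}
agentManager.ExecuteTask(task.ID) // the agent plans and executes the steps
```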
Conduit includes a powerful Agent Swarm framework inspired by OpenAI's Swarm pattern, enabling sophisticated multi-agent coordination with MCP tools. This system focuses on lightweight, scalable agent orchestration using two key primitives: Agents and Handoffs.
```
┌─────────────────┐     ┌───────────────────┐     ┌─────────────────┐
│   Coordinator   │────▶│  ContentCreator   │     │   DataAnalyst   │
│  (Task Router)  │     │ (Text Processing) │     │ (Data Analysis) │
└─────────────────┘     └───────────────────┘     └─────────────────┘
         │                                                 │
         │              ┌───────────────────┐              │
         └─────────────▶│   MemoryManager   │◀─────────────┘
                        │  (Info Storage)   │
                        └───────────────────┘
```
- Agent Coordination: Lightweight agent handoffs and communication
- Context Variables: Shared state management across agent conversations
- Function Calling: Rich tool integration with MCP server capabilities
- Memory Persistence: Shared memory across agent interactions
- Natural Language Interface: Human-friendly multi-agent workflows
```go
// Create swarm client with MCP tools
swarm := conduit.NewSwarmClient(mcpServer)

// Define specialized agents
coordinatorAgent := swarm.CreateAgent("coordinator", "Route tasks to appropriate agents", []string{
    "transfer_to_content", "transfer_to_data", "transfer_to_memory",
})

contentAgent := swarm.CreateAgent("content_creator", "Handle text processing tasks", []string{
    "uppercase", "lowercase", "snake_case", "camel_case", "word_count",
})

// Execute swarm workflow
result := swarm.Run(coordinatorAgent, []conduit.Message{
    {Role: "user", Content: "Convert 'Agent Swarm Integration' to snake_case and remember it"},
}, map[string]interface{}{
    "session_id": "demo_session",
})

// ✅ Coordinator routes to ContentCreator for text processing
// ✅ Then routes to MemoryManager for storage
// ✅ Result: agent_swarm_integration stored in memory
```

- Text Processing: Multi-step text transformations across agents
- Data Pipeline: Analysis, encoding, and storage workflows
- Memory Operations: Intelligent information management
- Complex Coordination: Multi-agent task decomposition and execution

See `examples/agent_swarm/`, `examples/agent_swarm_simple/`, and `examples/agent_swarm_llm/` for complete documentation and examples.
The Agent Swarm framework includes sophisticated workflow orchestration patterns:
- Sequential: Ordered step-by-step execution (ETL pipelines)
- Parallel: Concurrent independent execution (batch processing)
- DAG: Dependency-based execution graphs (complex data processing)
- Supervisor: Hierarchical oversight and control (mission-critical processes)
- Pipeline: Data flow through transformation stages (content processing)
- Conditional: Dynamic branching based on runtime conditions (quality control)
See `examples/agent_swarm_workflows/` for advanced workflow pattern examples.
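The actual workflow APIs live in that example; as a language-level illustration of what the DAG pattern means (plain Go, not Conduit's workflow API), each step runs as soon as all of its dependencies have finished:

```go
package main

import (
    "fmt"
    "sync"
)

type step struct {
    name string
    deps []string
    run  func()
}

// runDAG starts every step concurrently; each one blocks until all of its
// dependencies have signalled completion, giving dependency-ordered execution.
func runDAG(steps []step) {
    done := make(map[string]chan struct{}, len(steps))
    for _, s := range steps {
        done[s.name] = make(chan struct{})
    }
    var wg sync.WaitGroup
    for _, s := range steps {
        wg.Add(1)
        go func(s step) {
            defer wg.Done()
            for _, d := range s.deps {
                <-done[d] // wait for each dependency to finish
            }
            s.run()
            close(done[s.name]) // signal dependents
        }(s)
    }
    wg.Wait()
}

func main() {
    runDAG([]step{
        {name: "extract", run: func() { fmt.Println("extract") }},
        {name: "clean", deps: []string{"extract"}, run: func() { fmt.Println("clean") }},
        {name: "enrich", deps: []string{"extract"}, run: func() { fmt.Println("enrich") }},
        {name: "load", deps: []string{"clean", "enrich"}, run: func() { fmt.Println("load") }},
    })
}
```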
```bash
# Basic agent swarm demo (rule-based)
cd examples/agent_swarm
go run main.go

# LLM-powered agent swarm with Ollama
cd examples/agent_swarm_llm
go run main.go

# Advanced workflow patterns demo
cd examples/agent_swarm_workflows
go run main.go

# Simple swarm demo
cd examples/agent_swarm_simple

# Install dependencies
go mod tidy

# Run the full demo
go run main.go

# Or run with custom LLM integration
go run llm_demo.go

# Or use the test script
./test_agent_swarm.sh
```

Install Conduit in your Go project:

```bash
go get github.com/benozo/conduit
```

Then create your own MCP server:
```go
package main

import (
    "log"

    conduit "github.com/benozo/conduit/lib"
    "github.com/benozo/conduit/lib/tools"
    "github.com/benozo/conduit/mcp"
)

func main() {
    // Create configuration
    config := conduit.DefaultConfig()
    config.Port = 8080
    config.Mode = mcp.ModeStdio // For MCP clients

    // Create server
    server := conduit.NewServer(config)

    // Register tool packages
    tools.RegisterTextTools(server)
    tools.RegisterMemoryTools(server)
    tools.RegisterUtilityTools(server)

    // Register custom tools
    server.RegisterTool("my_tool", func(params map[string]interface{}, memory *mcp.Memory) (interface{}, error) {
        return map[string]string{"result": "Hello from my tool!"}, nil
    })

    // Start server
    log.Fatal(server.Start())
}
```

For tools that need rich parameter validation and documentation:
```go
package main

import (
    "fmt"
    "log"

    conduit "github.com/benozo/conduit/lib"
    "github.com/benozo/conduit/lib/tools"
    "github.com/benozo/conduit/mcp"
)

func main() {
    config := conduit.DefaultConfig()
    config.Mode = mcp.ModeStdio

    // Create enhanced server
    server := conduit.NewEnhancedServer(config)

    // Register standard tools
    tools.RegisterTextTools(server.Server)
    tools.RegisterMemoryTools(server.Server)

    // Register custom tool with rich schema
    server.RegisterToolWithSchema("weather",
        func(params map[string]interface{}, memory *mcp.Memory) (interface{}, error) {
            city := params["city"].(string)
            return map[string]interface{}{
                "result": fmt.Sprintf("Weather in %s: Sunny, 72°F", city),
                "city":   city, "temperature": "72°F", "condition": "Sunny",
            }, nil
        },
        conduit.CreateToolMetadata("weather", "Get weather for a city", map[string]interface{}{
            "city": conduit.StringParam("City name to get weather for"),
        }, []string{"city"}))

    log.Fatal(server.Start())
}
```

Build and use with any MCP client:

```bash
go build -o my-mcp-server .
./my-mcp-server --stdio   # For MCP clients
./my-mcp-server --http    # For HTTP API
./my-mcp-server --both    # For both protocols
```

For development or testing, you can also clone and run directly:
```bash
git clone https://github.com/benozo/conduit
cd conduit
go run main.go --stdio   # For MCP clients (VS Code Copilot, Cline, etc.)
go run main.go --http    # For HTTP API and web applications
go run main.go --both    # For both protocols simultaneously
```

VS Code Copilot:

```json
{
  "mcp.mcpServers": {
    "my-mcp-server": {
      "command": "/path/to/my-mcp-server",
      "args": ["--stdio"],
      "env": {}
    }
  }
}
```

Cline:
```json
{
  "mcpServers": {
    "my-mcp-server": {
      "autoApprove": [],
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "/path/to/my-mcp-server",
      "args": ["--stdio"]
    }
  }
}
```

Claude Desktop:
```json
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "/path/to/my-mcp-server",
      "args": ["--stdio"]
    }
  }
}
```

Embed Conduit directly in your existing Go application:
```go
package main

import (
    "log"

    conduit "github.com/benozo/conduit/lib"
    "github.com/benozo/conduit/lib/tools"
    "github.com/benozo/conduit/mcp"
)

func main() {
    // Create configuration
    config := conduit.DefaultConfig()
    config.Port = 8081
    config.Mode = mcp.ModeHTTP

    // Create server
    server := conduit.NewServer(config)

    // Register tool packages
    tools.RegisterTextTools(server)
    tools.RegisterMemoryTools(server)
    tools.RegisterUtilityTools(server)

    // Register custom tools
    server.RegisterTool("my_tool", func(params map[string]interface{}, memory *mcp.Memory) (interface{}, error) {
        return map[string]string{"result": "Hello from my tool!"}, nil
    })

    // Start server
    log.Fatal(server.Start())
}
```

Use MCP components directly without any server (you implement your own):
```go
package main

import (
    "fmt"

    "github.com/benozo/conduit/mcp"
)

func main() {
    // Create components
    memory := mcp.NewMemory()
    tools := mcp.NewToolRegistry()

    // Register tools
    tools.Register("my_tool", func(params map[string]interface{}, memory *mcp.Memory) (interface{}, error) {
        return map[string]string{"result": "Hello!"}, nil
    })

    // Use directly
    result, err := tools.Call("my_tool", map[string]interface{}{}, memory)
    fmt.Println(result, err)

    // Integrate into your own web server, CLI, gRPC service, etc.
}
```

```go
type Config struct {
    Port          int               // HTTP server port (default: 8080)
    OllamaURL     string            // Ollama API URL (default: http://localhost:11434)
    Mode          mcp.ServerMode    // Server mode
    Environment   map[string]string // Environment variables
    EnableCORS    bool              // Enable CORS
    EnableHTTPS   bool              // Enable HTTPS
    CertFile      string            // HTTPS certificate file
    KeyFile       string            // HTTPS key file
    EnableLogging bool              // Enable logging
}
```
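As an illustration, a manually built configuration might look like the following (field values are examples; `conduit.DefaultConfig()` is the usual starting point):

```go
// Illustrative values only — DefaultConfig() supplies sensible defaults.
config := &conduit.Config{
    Port:          8080,
    OllamaURL:     "http://localhost:11434",
    Mode:          mcp.ModeHTTP,
    EnableCORS:    true,
    EnableLogging: true,
}
```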
```go
// Standard server with default config
server := conduit.NewServer(nil)

// Standard server with custom config
server := conduit.NewServer(config)

// Enhanced server with rich schema support
server := conduit.NewEnhancedServer(config)

// Standard server with custom model
server := conduit.NewServerWithModel(config, myModelFunc)
```
```go
// Register tool packages
tools.RegisterTextTools(server)    // Text manipulation tools
tools.RegisterMemoryTools(server)  // Memory management tools
tools.RegisterUtilityTools(server) // Utility tools

// Register individual tools
server.RegisterTool("my_tool", func(params map[string]interface{}, memory *mcp.Memory) (interface{}, error) {
    // Tool implementation
    return result, nil
})
```

For tools that need rich parameter validation and documentation, use the enhanced registration system:
```go
// Create enhanced server
server := conduit.NewEnhancedServer(config)

// Register standard tools (optional)
tools.RegisterTextTools(server.Server)
tools.RegisterMemoryTools(server.Server)
tools.RegisterUtilityTools(server.Server)

// Register custom tools with full schema metadata
server.RegisterToolWithSchema("calculate",
    func(params map[string]interface{}, memory *mcp.Memory) (interface{}, error) {
        operation := params["operation"].(string)
        a := params["a"].(float64)
        b := params["b"].(float64)

        var result float64
        switch operation {
        case "add":
            result = a + b
        case "multiply":
            result = a * b
        default:
            return nil, fmt.Errorf("unknown operation: %s", operation)
        }

        return map[string]interface{}{"result": result}, nil
    },
    conduit.CreateToolMetadata("calculate", "Perform mathematical operations", map[string]interface{}{
        "operation": conduit.EnumParam("Mathematical operation", []string{"add", "multiply"}),
        "a":         conduit.NumberParam("First number"),
        "b":         conduit.NumberParam("Second number"),
    }, []string{"operation", "a", "b"}))

// Start with enhanced schema support
server.Start()
```
```go
// Parameter type helpers
conduit.NumberParam("Description")                         // Numbers
conduit.StringParam("Description")                         // Strings
conduit.BoolParam("Description")                           // Booleans
conduit.ArrayParam("Description", "itemType")              // Arrays
conduit.EnumParam("Description", []string{"opt1", "opt2"}) // Enums

// Complete metadata builder
conduit.CreateToolMetadata(name, description, properties, required)
```

- Rich Schemas: Full JSON Schema validation with parameter types and descriptions
- Better Documentation: MCP clients show detailed parameter information
- Type Safety: Automatic parameter validation and error handling
- IDE Support: Better autocomplete and hints in MCP clients
- Professional: Production-ready tool definitions
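For example, several of the helper types can be combined in one metadata definition (the `tag_text` tool here is hypothetical, used only to show the helpers together):

```go
// Hypothetical tool metadata combining the parameter helpers above;
// the resulting metadata would be passed to RegisterToolWithSchema.
meta := conduit.CreateToolMetadata("tag_text", "Attach tags to a piece of text", map[string]interface{}{
    "text":      conduit.StringParam("Text to tag"),
    "tags":      conduit.ArrayParam("Tags to apply", "string"),
    "overwrite": conduit.BoolParam("Replace existing tags instead of appending"),
    "priority":  conduit.NumberParam("Priority from 1 to 5"),
}, []string{"text", "tags"})
```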
```go
// Use default Ollama model
server := conduit.NewServer(config) // Uses default Ollama

// Set custom model
server.SetModel(func(ctx mcp.ContextInput, req mcp.MCPRequest, memory *mcp.Memory, onToken mcp.StreamCallback) (string, error) {
    // Custom model implementation: call your LLM of choice, optionally
    // streaming tokens back through onToken, then return the full response.
    return response, nil
})

// Use built-in model helpers
ollamaModel := conduit.CreateOllamaModel("http://localhost:11434")
server.SetModel(ollamaModel)
```

Text tools:

- `uppercase` - Convert text to uppercase
- `lowercase` - Convert text to lowercase
- `reverse` - Reverse text
- `word_count` - Count words in text
- `trim` - Trim whitespace
- `title_case` - Convert to title case
- `snake_case` - Convert to snake_case
- `camel_case` - Convert to camelCase
- `replace` - Replace text patterns
- `extract_words` - Extract words from text
- `sort_words` - Sort words alphabetically
- `char_count` - Count characters
- `remove_whitespace` - Remove whitespace
Memory tools:

- `remember` - Store information in memory
- `recall` - Retrieve stored information
- `forget` - Remove information from memory
- `list_memories` - List all stored memories
- `clear_memory` - Clear all memories
- `memory_stats` - Get memory statistics
Utility tools:

- `timestamp` - Generate timestamps
- `uuid` - Generate UUIDs
- `base64_encode` - Base64 encoding
- `base64_decode` - Base64 decoding
- `url_encode` - URL encoding
- `url_decode` - URL decoding
- `hash_md5` - MD5 hashing
- `hash_sha256` - SHA256 hashing
- `json_format` - Format JSON
- `json_minify` - Minify JSON
- `random_number` - Generate random numbers
- `random_string` - Generate random strings
When running in HTTP mode, the server exposes these endpoints:
- `GET /schema` - List available tools and their schemas
- `GET /health` - Health check endpoint
- `POST /tool` - Direct tool call endpoint (JSON)
- `POST /chat` - Natural language chat with automatic tool selection (JSON)
- `POST /mcp` - MCP protocol endpoint with Server-Sent Events (SSE)
- `POST /react` - ReAct agent endpoint for reasoning and action
```bash
# Call a specific tool directly
curl -X POST http://localhost:8080/tool \
  -H "Content-Type: application/json" \
  -d '{"name": "uppercase", "params": {"text": "hello world"}}'
```
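The same call can be made from Go; a minimal sketch mirroring the curl request above, using only the standard library:

```go
package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Mirror the curl example: call the uppercase tool via POST /tool.
    payload := bytes.NewBufferString(`{"name": "uppercase", "params": {"text": "hello world"}}`)
    resp, err := http.Post("http://localhost:8080/tool", "application/json", payload)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    out, _ := io.ReadAll(resp.Body)
    fmt.Println(string(out)) // tool result as JSON
}
```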
The `/chat` endpoint allows LLMs (like Ollama) to automatically select and call tools based on natural language input:

```bash
# Let the LLM decide which tools to use
curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "convert hello world to uppercase"}'

# The LLM will automatically:
# 1. Analyze the request
# 2. Select the appropriate tool (uppercase)
# 3. Execute the tool with correct parameters
# 4. Return a natural language response
```

```bash
# Get available tools
curl http://localhost:8080/schema

# Health check
curl http://localhost:8080/health
```

Conduit includes built-in support for LLM integration with automatic tool selection. The LLM can analyze natural language requests and automatically choose the right tools.
Conduit provides seamless integration with Ollama for local LLM tool calling:
```go
package main

import (
    "log"

    conduit "github.com/benozo/conduit/lib"
    "github.com/benozo/conduit/lib/tools"
    "github.com/benozo/conduit/mcp"
)

func main() {
    config := conduit.DefaultConfig()
    config.Mode = mcp.ModeHTTP
    config.Port = 9090

    // Configure Ollama integration
    config.OllamaURL = "http://localhost:11434" // Your Ollama server
    // Model: llama3.2 (or any Ollama model you have pulled)

    server := conduit.NewEnhancedServer(config)

    // Register all available tools
    tools.RegisterTextTools(server.Server)
    tools.RegisterMemoryTools(server.Server)
    tools.RegisterUtilityTools(server.Server)

    // Set up Ollama model with tool awareness
    ollamaModel := conduit.CreateOllamaToolAwareModel(config.OllamaURL, server.GetTools())
    server.SetModel(ollamaModel)

    log.Printf("Starting Ollama-powered server on port %d", config.Port)
    log.Fatal(server.Start())
}
```

When you send a request to `/chat`, here's what happens:
1. User Request: "convert hello world to uppercase"
2. LLM Analysis: Ollama analyzes the request and available tools
3. Tool Selection: LLM chooses the `uppercase` tool automatically
4. Tool Execution: Tool runs with parameters `{"text": "hello world"}`
5. Result Integration: LLM receives the tool result and generates a natural response
6. Final Response: "The text 'hello world' in uppercase is: HELLO WORLD"
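The same flow can be driven from Go instead of curl; a minimal sketch using only the standard library (the `message` field matches the curl examples in this section):

```go
package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // POST a natural-language request; the server selects and runs the tool.
    body := bytes.NewBufferString(`{"message": "convert hello world to uppercase"}`)
    resp, err := http.Post("http://localhost:9090/chat", "application/json", body)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    out, _ := io.ReadAll(resp.Body)
    fmt.Println(string(out)) // natural-language answer produced after tool execution
}
```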
```bash
# Start your Ollama server
ollama serve

# Pull a model (if not already available)
ollama pull llama3.2

# Test natural language tool calling
curl -X POST http://localhost:9090/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "generate a UUID for me"}'

curl -X POST http://localhost:9090/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "encode the text Mountain123 in base64"}'

curl -X POST http://localhost:9090/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "remember that my favorite color is blue"}'
```

- ✅ Natural Language: Use tools via conversational requests
- ✅ Automatic Selection: LLM chooses the right tools for each task
- ✅ Context Aware: LLM understands tool relationships and can chain operations
- ✅ Error Handling: LLM can retry or explain tool failures
- ✅ Rich Responses: Get natural language explanations with tool results
| Example | Type | Description | Try It |
|---|---|---|---|
| OpenAI Integration | Production API | Full OpenAI integration with comprehensive tools | examples/openai |
| Local Ollama | Local LLM | Privacy-focused local AI with tool calling | examples/ollama |
| Agent Swarm | Multi-Agent | Intelligent agent coordination and handoffs | examples/agent_swarm_simple |
| Pure Library | Go Integration | Embed MCP components in your applications | examples/pure_library |
```bash
# OpenAI-powered tool server (production-ready)
cd examples/openai && OPENAI_API_KEY=sk-your-key go run main.go

# Local Ollama integration (privacy-focused)
cd examples/ollama && go run main.go

# Multi-agent coordination (autonomous)
cd examples/agent_swarm_simple && go run main.go

# Library-only usage (embeddable)
cd examples/pure_library && go run main.go
```

See the full example list → `examples/README.md`
The `examples/` directory contains complete demonstrations of different usage patterns:

Library usage:

- `pure_library/` - Use Conduit as a Go library in your application
- `pure_library_cli/` - Command-line tool built with Conduit
- `pure_library_web/` - Web server with embedded Conduit
- `embedded/` - Embed Conduit in existing applications
- `custom_tools/` - Register custom tools with enhanced schemas

Protocol examples:

- `stdio_example/` - MCP stdio server for VS Code Copilot, Cline, etc.
- `sse_example/` - HTTP Server-Sent Events for web applications
- `pure_mcp/` - Pure MCP implementation

LLM integration:

- `ollama/` - Complete Ollama integration with tool calling
- `model_integration/` - Custom model integration patterns
- `react/` - ReAct agent with reasoning and actions

Agents and swarms:

- `ai_agents/` - AI Agents framework with task management
- `agents_test/` - Basic agent functionality testing
- `agents_ollama/` - Agents with Ollama LLM integration
- `agents_deepinfra/` - Agents with DeepInfra LLM integration
- `agents_library_mode/` - Library-mode agent usage
- `agents_mock_llm/` - Mock LLM for testing agents
- `agents_vue_builder/` - Vue.js application builder agent
- `agent_swarm/` - Basic agent swarm coordination (rule-based)
- `agent_swarm_llm/` - LLM-powered agent swarm with Ollama intelligence
- `agent_swarm_simple/` - Simple agent swarm demo
- `agent_swarm_workflows/` - Advanced workflow patterns (DAG, Supervisor, Pipeline, Conditional, etc.)
```bash
# Try the Ollama integration example
cd examples/ollama
go run main.go

# Test with curl
curl -X POST http://localhost:9090/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "convert hello world to uppercase"}'

# Try the stdio example for MCP clients
cd examples/stdio_example
go run main.go --stdio

# Test the pure library example
cd examples/pure_library
go run main.go

# Try AI Agents framework
cd examples/ai_agents
go run main.go

# Try basic Agent Swarm (rule-based)
cd examples/agent_swarm
go run main.go

# Try LLM-powered Agent Swarm with Ollama
cd examples/agent_swarm_llm
go run main.go

# Try advanced workflow patterns
cd examples/agent_swarm_workflows
go run main.go

# Try Agents with Ollama
cd examples/agents_ollama
go run main.go
```

Each example includes a README with specific instructions and use cases.
- `ModeStdio`: Runs the stdio MCP server for universal MCP client integration
- `ModeHTTP`: Runs the HTTP/SSE server for web applications and custom integrations
- `ModeBoth`: Runs both protocols simultaneously for maximum compatibility
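Selecting a mode in code follows the same pattern as the earlier examples (`mcp.ModeBoth` mirrors the `--both` flag):

```go
config := conduit.DefaultConfig()
config.Mode = mcp.ModeBoth // or mcp.ModeStdio / mcp.ModeHTTP
server := conduit.NewServer(config)
```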
```go
type Config struct {
    Port          int               // HTTP server port (default: 8080)
    OllamaURL     string            // Ollama API URL
    Mode          mcp.ServerMode    // Server mode
    Environment   map[string]string // Environment variables
    EnableCORS    bool              // Enable CORS
    EnableHTTPS   bool              // Enable HTTPS
    CertFile      string            // HTTPS certificate file
    KeyFile       string            // HTTPS key file
    EnableLogging bool              // Enable logging
}
```

Check the `examples/` directory for more usage examples:
Core examples:

- `stdio_example/` - MCP stdio server for client integration (VS Code, Cline, etc.)
- `sse_example/` - HTTP/SSE server for web applications and real-time integration
- `embedded/` - Basic embedded usage with server wrapper
- `custom_tools/` - Enhanced tool registration with rich schemas and validation
- `model_integration/` - Custom model integration patterns
- `pure_library/` - Pure library usage without any server
- `pure_library_cli/` - CLI tool using MCP components
- `pure_library_web/` - Custom web server using MCP components
- `pure_mcp/` - Direct MCP usage example
- `ollama/` - Ollama integration with local LLM support
- `direct_ollama/` - Direct Ollama model usage without server
ReAct examples:

- `react/` - ReAct agent (Reasoning + Acting) pattern (see the conceptual sketch after these lists)
- `direct_mcp/` - Raw MCP package ReAct usage
Agent examples:

- `ai_agents/` - AI Agents framework with autonomous task execution
- `agents_test/` - Basic agent functionality and testing
- `agents_ollama/` - Agents with Ollama LLM integration
- `agents_deepinfra/` - Agents with DeepInfra LLM integration
- `agents_library_mode/` - Library-mode agent usage patterns
- `agents_mock_llm/` - Mock LLM for agent testing and development
- `agents_vue_builder/` - Vue.js application builder agent
- `agent_swarm/` - Basic agent swarm coordination and handoffs (rule-based)
- `agent_swarm_llm/` - LLM-powered agent swarm with Ollama intelligence
- `agent_swarm_simple/` - Simple agent swarm demo
- `agent_swarm_workflows/` - Advanced workflow patterns (DAG, Supervisor, Pipeline, Conditional, etc.)
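For orientation, here is a conceptual, self-contained sketch of the ReAct loop in plain Go — a stubbed "model" stands in for the LLM, and this is not Conduit's API:

```go
package main

import (
    "fmt"
    "strings"
)

// reason is a stand-in for the LLM: given the observations so far, it emits a
// thought plus the next action ("finish" ends the loop).
func reason(history []string) (thought, action, arg string) {
    if len(history) == 0 {
        return "I should uppercase the input first", "uppercase", "hello world"
    }
    return "I have the result and can answer", "finish", history[len(history)-1]
}

// callTool plays the role of the MCP tool registry.
func callTool(name, arg string) string {
    if name == "uppercase" {
        return strings.ToUpper(arg)
    }
    return "unknown tool"
}

func main() {
    var history []string
    for step := 0; step < 5; step++ { // bounded reason → act → observe loop
        thought, action, arg := reason(history)
        fmt.Println("Thought:", thought)
        if action == "finish" {
            fmt.Println("Answer:", arg)
            return
        }
        obs := callTool(action, arg)
        fmt.Println("Observation:", obs)
        history = append(history, obs)
    }
}
```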
1. MCP Stdio Server (for VS Code, Cline, etc.):

```bash
cd examples/stdio_example && go run main.go
# Stdio MCP server for client integration
```

2. HTTP/SSE Server (for web applications):

```bash
cd examples/sse_example && go run main.go
# HTTP server at http://localhost:8090 with SSE support
# Visit http://localhost:8090/demo for interactive demo
```

3. Ollama Integration:

```bash
cd examples/ollama && go run main.go
# Server at http://localhost:8084 with Ollama backend
```

4. ReAct Agent:

```bash
cd examples/react && go run main.go
# ReAct pattern server at http://localhost:8085
```

5. Direct MCP Usage:

```bash
cd examples/react/direct_mcp && go run main.go
# Pure MCP package demo (no server)
```

6. AI Agents Framework:

```bash
cd examples/ai_agents && go run main.go
# AI Agents with autonomous task execution
```

7. Agent Swarm Basic:

```bash
cd examples/agent_swarm && go run main.go
# Multi-agent coordination and handoffs
```

8. Advanced Workflow Patterns:

```bash
cd examples/agent_swarm_workflows && go run main.go
# DAG, Supervisor, Pipeline, and Conditional workflows
```

9. Agents with Ollama:

```bash
cd examples/agents_ollama && go run main.go
# AI Agents powered by Ollama LLM
```

```bash
# Install as library (recommended)
go get github.com/benozo/conduit

# Build your own MCP server
go build -o my-mcp-server .

# Run tests (if developing Conduit itself)
go test ./...
```

After building your own MCP server with Conduit (via `go get github.com/benozo/conduit`), configure any MCP-compatible client:
VS Code Copilot:

```json
{
  "mcp.mcpServers": {
    "my-conduit-server": {
      "command": "/path/to/my-mcp-server",
      "args": ["--stdio"],
      "env": {}
    }
  }
}
```

Cline:
"mcpServers": {
"my-conduit-server": {
"autoApprove": [],
"disabled": false,
"timeout": 60,
"type": "stdio",
"command": "/path/to/my-mcp-server",
"args": ["--stdio"]
}
}
}{
"mcpServers": {
"my-conduit-server": {
"command": "/path/to/my-mcp-server",
"args": ["--stdio"]
}
}
}For any other MCP client, use the standard MCP stdio configuration:
- Command: `/path/to/my-mcp-server`
- Args: `["--stdio"]`
- Protocol: stdio

Note: Replace `/path/to/my-mcp-server` with the actual path to your built binary.
To test that all tools are available to any MCP client:
```bash
# Test your built MCP server
echo '{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}' | \
  ./my-mcp-server --stdio | jq '.result.tools | length'
```

Should show 31 tools available.
To verify the encoding/formatting tools work correctly:
```bash
# Test base64_decode
echo '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "base64_decode", "arguments": {"text": "SGVsbG8gV29ybGQ="}}}' | ./my-mcp-server --stdio

# Test url_decode
echo '{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "url_decode", "arguments": {"text": "Hello%20World%21"}}}' | ./my-mcp-server --stdio

# Test json_format
echo '{"jsonrpc": "2.0", "id": 3, "method": "tools/call", "params": {"name": "json_format", "arguments": {"text": "{\"name\":\"test\",\"value\":123}"}}}' | ./my-mcp-server --stdio

# Test json_minify
echo '{"jsonrpc": "2.0", "id": 4, "method": "tools/call", "params": {"name": "json_minify", "arguments": {"text": "{\n  \"name\": \"test\",\n  \"value\": 123\n}"}}}' | ./my-mcp-server --stdio
```

All tools should return proper JSON-RPC responses with results.
Error: "no required module provides package"
- Make sure you've run `go get github.com/benozo/conduit`
- Ensure your `go.mod` file includes the Conduit dependency
- Run `go mod tidy` to resolve dependencies
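After these steps, a minimal `go.mod` might look like the following (module name and version are illustrative, not the actual release number):

```
// go.mod — module name and version are illustrative
module my-mcp-server

go 1.21

require github.com/benozo/conduit v0.1.0
```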
Error: "Connection closed"
- Verify your binary builds correctly: `go build -o my-mcp-server .`
- Test the binary manually: `echo '{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}' | ./my-mcp-server --stdio`
MCP Client Not Detecting Tools
- Verify the client supports MCP stdio protocol
- Check client configuration points to the correct binary path
- Test the stdio interface manually (see verification steps above)
- Ensure proper timeout settings (some clients may need 30-60 seconds)
MIT License - see LICENSE file for details.
