A production-ready Go library for building MCP (Model Context Protocol) agents that connect to multiple MCP servers and execute tools using LLMs. This is a fully independent package that can be used in any Go application.
MCP Agent is a Go library that provides a complete framework for building AI agents that interact with MCP servers. It handles:
- Multi-Server MCP Connections: Connect to multiple MCP servers simultaneously (HTTP, SSE, stdio protocols)
- LLM Integration: Works with OpenAI, AWS Bedrock, Google Vertex AI, and other LLM providers
- Tool Execution: Automatic tool discovery, execution, and result handling
- Code Execution Mode: Execute Go code instead of JSON tool calls for complex workflows
- Smart Routing: Dynamically filter tools based on conversation context
- Large Output Handling: Automatically handle tool outputs that exceed context limits
- Observability: Built-in tracing with Langfuse support
- Caching: Intelligent caching of MCP server metadata and tool definitions
```bash
# Add to your go.mod
go get mcpagent

# Or use a replace directive for local development
replace mcpagent => ../mcpagent
```

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	mcpagent "mcpagent/agent"
	"mcpagent/llm"
)

func main() {
	// Initialize the LLM
	openAIKey := os.Getenv("OPENAI_API_KEY")
	llmModel, err := llm.InitializeLLM(llm.Config{
		Provider: llm.ProviderOpenAI,
		ModelID:  "gpt-4.1",
		APIKeys: &llm.ProviderAPIKeys{
			OpenAI: &openAIKey,
		},
	})
	if err != nil {
		log.Fatalf("initialize LLM: %v", err)
	}

	// Create the agent
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	agent, err := mcpagent.NewAgent(
		ctx,
		llmModel,
		"",                 // server name (empty = all servers)
		"mcp_servers.json", // MCP config path
		"gpt-4.1",          // model ID
		nil,                // tracer (optional)
		"",                 // trace ID
		nil,                // logger (optional)
	)
	if err != nil {
		log.Fatalf("create agent: %v", err)
	}

	// Ask a question
	response, err := agent.Ask(ctx, "What tools are available?")
	if err != nil {
		log.Fatalf("ask: %v", err)
	}
	fmt.Println(response)
}
```

See examples/ for complete working examples.
In the default mode (SimpleAgent), the LLM invokes tools directly through native tool calling:
```go
agent, err := mcpagent.NewAgent(
	ctx, llmModel, "", "config.json", "model-id",
	nil, "", nil,
	mcpagent.WithMode(mcpagent.SimpleAgent),
)
```

Code execution mode runs Go code instead of JSON tool calls for complex logic:
```go
agent, err := mcpagent.NewAgent(
	ctx, llmModel, "", "config.json", "model-id",
	nil, "", nil,
	mcpagent.WithCodeExecutionMode(true),
	mcpagent.SetFolderGuardPaths([]string{"/workspace"}, []string{"/workspace"}),
)
```

In this mode, the LLM can write Go programs that import and use MCP tools as native functions.
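For illustration, a generated program might look like the sketch below; the mcptools/filesystem import path and the ReadFile binding are hypothetical stand-ins for whatever the code generator actually emits:

```go
// Hypothetical sketch of LLM-generated code. The import path and the
// filesystem.ReadFile binding are illustrative assumptions, not the
// library's actual generated API.
package main

import (
	"fmt"
	"log"

	"mcptools/filesystem" // hypothetical generated binding for the filesystem MCP server
)

func main() {
	// The MCP tool is called as a native Go function rather than a JSON tool call.
	content, err := filesystem.ReadFile("./demo/notes.txt")
	if err != nil {
		log.Fatalf("read_file tool failed: %v", err)
	}
	fmt.Printf("read %d bytes\n", len(content))
}
```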
Smart routing dynamically filters tools based on conversation context to reduce token usage:
```go
agent, err := mcpagent.NewAgent(
	ctx, llmModel, "", "config.json", "model-id",
	nil, "", nil,
	mcpagent.WithSmartRouting(true),
	mcpagent.WithSmartRoutingThresholds(20, 3), // max tools, max servers
)
```

Large output handling automatically deals with tool outputs that exceed context limits:
```go
agent, err := mcpagent.NewAgent(
	ctx, llmModel, "", "config.json", "model-id",
	nil, "", nil,
	mcpagent.WithLargeOutputVirtualTools(true),
	mcpagent.WithLargeOutputThreshold(20000), // characters
)
```

Intelligent caching of MCP server metadata reduces connection times by 60-85%:
```go
// Caching is enabled by default.
// Configure it via environment variables:
//   MCP_CACHE_DIR=/path/to/cache
//   MCP_CACHE_TTL_MINUTES=10080  (7 days)
```
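For example, to relocate the cache and keep entries for a week:

```bash
export MCP_CACHE_DIR="$HOME/.cache/mcpagent"
export MCP_CACHE_TTL_MINUTES=10080  # 7 days
```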
Built-in tracing with Langfuse support:

```go
tracer := observability.NewLangfuseTracer(...)
agent, err := mcpagent.NewAgent(
	ctx, llmModel, "", "config.json", "model-id",
	tracer, "trace-id", logger,
)
```

Comprehensive documentation is available in the docs/ directory:
- Code Execution Agent - Execute Go code with MCP tools
- Tool-Use Agent - Standard tool calling mode
- Smart Routing - Dynamic tool filtering
- Large Output Handling - Handle large tool outputs
- MCP Cache System - Server metadata caching
- Folder Guard - Fine-grained file access control
- LLM Resilience - Error handling and fallbacks
- Event System - Event architecture
- Token Tracking - Usage monitoring
Create a JSON file with your MCP servers:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./demo"]
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```

The agent supports extensive configuration via functional options:
```go
agent, err := mcpagent.NewAgent(
	ctx, llmModel, "", "config.json", "model-id",
	tracer, traceID, logger,

	// Agent mode
	mcpagent.WithMode(mcpagent.SimpleAgent),

	// Conversation settings
	mcpagent.WithMaxTurns(30),
	mcpagent.WithTemperature(0.7),
	mcpagent.WithToolChoice("auto"),

	// Code execution
	mcpagent.WithCodeExecutionMode(true),
	mcpagent.SetFolderGuardPaths(allowedRead, allowedWrite),

	// Smart routing
	mcpagent.WithSmartRouting(true),
	mcpagent.WithSmartRoutingThresholds(20, 3),

	// Large output handling
	mcpagent.WithLargeOutputVirtualTools(true),
	mcpagent.WithLargeOutputThreshold(20000),

	// Custom tools
	mcpagent.WithCustomTools(customTools),

	// Tool selection
	mcpagent.WithSelectedTools([]string{"server1:tool1", "server2:*"}),
	mcpagent.WithSelectedServers([]string{"server1", "server2"}),
)
```

The package includes comprehensive testing utilities:
```bash
# Run all tests
cd cmd/testing
go test ./...

# Run specific tests
go run testing.go agent-mcp --log-file logs/test.log
go run testing.go code-exec --log-file logs/test.log
go run testing.go smart-routing --log-file logs/test.log
```

See cmd/testing/README.md for details.
```
mcpagent/
├── agent/                  # Core agent implementation
│   ├── agent.go            # Main Agent struct and NewAgent()
│   ├── conversation.go     # Conversation loop and tool execution
│   ├── connection.go       # MCP server connection management
│   └── ...
├── mcpclient/              # MCP client implementations
│   ├── client.go           # Client interface and implementations
│   ├── stdio_manager.go    # stdio protocol
│   ├── sse_manager.go      # SSE protocol
│   └── http_manager.go     # HTTP protocol
├── mcpcache/               # Caching system
│   ├── manager.go          # Cache manager
│   └── codegen/            # Code generation for tools
├── llm/                    # LLM provider integration
│   ├── providers.go        # Provider implementations
│   └── types.go            # LLM types
├── events/                 # Event system
│   ├── data.go             # Event data structures
│   └── types.go            # Event types
├── logger/                 # Logging
│   └── v2/                 # Logger v2 interface
├── observability/          # Tracing and observability
│   ├── tracer.go           # Tracer interface
│   └── langfuse_tracer.go  # Langfuse implementation
├── executor/               # Tool execution handlers
├── examples/               # Example applications
└── docs/                   # Documentation
```
Supported LLM providers:

- OpenAI: GPT-4, GPT-3.5, and other models
- AWS Bedrock: Claude Sonnet, Claude Haiku, and other models
- Google Vertex AI: Gemini, PaLM, and other models
- Custom providers: extensible provider interface
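Switching providers is a matter of changing the llm.Config. For instance, a Bedrock setup might look like the sketch below; llm.ProviderBedrock is an assumed constant (by analogy with llm.ProviderOpenAI above), and credentials are presumed to come from the standard AWS environment:

```go
// Sketch only: llm.ProviderBedrock is assumed by analogy with
// llm.ProviderOpenAI; see llm/providers.go for the actual constants.
llmModel, err := llm.InitializeLLM(llm.Config{
	Provider: llm.ProviderBedrock,
	ModelID:  "anthropic.claude-3-5-sonnet-20240620-v1:0",
})
if err != nil {
	log.Fatalf("initialize LLM: %v", err)
}
```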
Supported MCP transport protocols:

- stdio: standard input/output (most common)
- SSE: Server-Sent Events
- HTTP: REST API
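A single config file can mix transports. Only the stdio form is documented above, so the "type" and "url" fields in the SSE and HTTP entries below are assumptions about the config schema, included for illustration:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./demo"]
    },
    "events": { "type": "sse", "url": "http://localhost:8080/sse" },
    "api": { "type": "http", "url": "http://localhost:8080/mcp" }
  }
}
```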
Contributions are welcome! Please see the Documentation Writing Guide for standards.
This project is licensed under the MIT License - see the LICENSE file for details.
- MCP Protocol: Built on the Model Context Protocol
- multi-llm-provider-go: LLM provider abstraction layer
- mcp-go: MCP protocol implementation
Made with ❤️ for the AI community