An advanced local AI personal assistant with multi-provider support including Chinese domestic models, built with TypeScript and Node.js.
LocalBot is a powerful, modular AI assistant platform designed for developers and power users who need local AI capabilities with extensive tool integration.
LocalBot provides powerful tools for software development:
- File Operations: Read, write, copy, move, and manage files programmatically
- Shell Commands: Execute system commands, manage processes, and control the system
- API Operations: Make HTTP requests (GET, POST, PUT, DELETE, PATCH) and fetch web content
- Data Processing: Parse and process CSV/JSON files, perform text analysis and transformations
- Code Generation: Generate code snippets and assist with development tasks
Comprehensive tools for data analysis and visualization:
- CSV/JSON Processing: Read, write, and manipulate structured data files
- Text Analysis: Analyze text content, search, and replace patterns
- Mathematical Calculations: Perform complex mathematical operations
- Data Visualization: Create charts and visualizations from data
Powerful system management capabilities:
- System Information: Get detailed system information and resource usage
- Process Management: List, monitor, and kill processes
- Environment Variables: Read and modify environment variables
- Directory Operations: Navigate and manage directories
Advanced utilities for everyday tasks:
- Security Tools: Encrypt/decrypt data, generate hashes
- Compression: Compress and decompress files
- Encoding: Base64 encode/decode operations
- Random Generation: Generate UUIDs and random strings
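As a concrete illustration of the encoding utilities, the sketch below round-trips a string through Base64 using Node's built-in `Buffer`; the helper names are illustrative, not LocalBot's internal API:

```typescript
// What the base64_encode / base64_decode tools conceptually do, shown with
// Node's Buffer API. The helper names are ours, not LocalBot's internal API.
const encodeBase64 = (text: string): string =>
  Buffer.from(text, "utf8").toString("base64");

const decodeBase64 = (b64: string): string =>
  Buffer.from(b64, "base64").toString("utf8");

const encoded = encodeBase64("hello"); // "aGVsbG8="
const decoded = decodeBase64(encoded); // "hello"
```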
- Multi-AI Model Routing: Support for mainstream AI models (OpenAI, Alibaba Cloud Qwen, Anthropic Claude, Baidu ERNIE Bot, Tencent HunYuan, Zhipu AI, SiliconCloud, Ollama)
- Enhanced Memory System: Three-tier memory architecture (diary, long-term, vector search) inspired by Clawdbot
- Advanced Skills System: Priority-based loading, hot reload, and dependency management inspired by Clawdbot
- Enhanced MCP Protocol: Tool caching, hints, tracking, and filtering capabilities inspired by Clawdbot
- Session Management: Multi-session support with conversation history
- RESTful API: Standardized API endpoints for external integrations
- Business Process Automation: Automated workflow execution with task scheduling
- Tool Execution: 44+ built-in tools across 5 categories
- Plugin System: Extensible architecture for adding custom tools and skills
LocalBot features a powerful OpenClaw-style skills system:
- Markdown-Based: Skills defined in SKILL.md files with metadata
- Dynamic Loading: Skills loaded automatically from workspace/skills directory
- Smart Matching: Automatic skill matching based on user intent
- 13 Built-in Skills: Pre-configured skills for common use cases
- Extensible: Easy to add custom skills
Available Skills:
- business-automation - Business process automation
- code-generation - Code generation and assistance
- code-review - Code review and analysis
- daily-life-assistant - Daily life tasks and assistance
- data-analysis - Data analysis and processing
- data-visualization - Data visualization and charting
- debugging - Debugging and troubleshooting
- file-operations - File system operations
- shell-commands - Shell command execution
- system-management - System management and monitoring
- testing - Testing and quality assurance
- text-processing - Text processing and manipulation
- web-development - Web development assistance
LocalBot supports multiple communication platforms for seamless integration:
- CLI: Command-line interface for local interactions
- REST API: Standardized HTTP API for external integrations
- MCP Protocol: Model Context Protocol for AI assistant integration
- Telegram: Telegram bot for instant messaging
- Discord: Discord bot for community interactions
- Slack: Slack bot for team collaboration
- WhatsApp: WhatsApp bot for personal messaging
- WeCom: Enterprise WeChat (企业微信) bot for enterprise messaging
- Web: Web interface for browser-based interactions
- Mobile: Android, HarmonyOS, and iOS deployment support
Platform Features:
- Unified Interface: Consistent API across all platforms
- Independent Sessions: Separate conversation history per platform
- Platform-Specific Data: Metadata and context preservation
- Easy Configuration: Simple environment variable setup
- Extensible: Easy to add new platforms
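For example, platform bots are wired up through environment variables. The variable names below are hypothetical placeholders; check the Multi-Platform Guide for the actual keys:

```bash
# Hypothetical variable names; see the Multi-Platform Guide for actual keys
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
DISCORD_BOT_TOKEN=your_discord_bot_token
SLACK_BOT_TOKEN=your_slack_bot_token
```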
For detailed platform configuration, see Multi-Platform Guide, Mobile Deployment Guide, iOS Deployment Guide, Web Development Guide, and WeCom Integration Guide.
- System Control: Execute system commands and scripts
- Browser Automation: Automate web browsing and data extraction
- File Operations: Read, write, and manage files
- Network Requests: Make HTTP requests and API calls
- Custom Tools: Execute custom skills and tools
- Permission System: Fine-grained access control
- Approval Workflow: Optional user approval for actions
- Action Logging: Complete audit trail
For details, see Reverse Control Engine Guide.
- Cron Tasks: Schedule tasks with cron expressions
- Webhook Triggers: Trigger tasks via HTTP webhooks
- Monitoring Rules: Monitor GitHub, weather, prices, and custom conditions
- 7×24 Service: Always-on monitoring and alerting
- Action Types: Messages, workflows, notifications, and custom actions
- Event System: Real-time event notifications
- Task Management: Add, remove, and query tasks
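To make the task model concrete, here is a hypothetical task definition combining a cron trigger with a message action. The JSON schema shown is an assumption for illustration, not the engine's documented format (see the Proactive Engine Guide):

```json
{
  "id": "weekday-report",
  "type": "cron",
  "schedule": "0 9 * * 1-5",
  "action": {
    "type": "message",
    "payload": "Generate the daily status report"
  }
}
```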
For details, see Proactive Engine Guide.
- Multi-Role Stance Splitting: Creates 5 distinct thinking roles with different stances to generate conflicts and debates
- Logical Progression: Each round of thinking is deeper than previous, not simple repetition
- Self-Negation: Later iterations overturn earlier conclusions, achieving true self-correction
- 5 Thinking Roles: Rational Analyst, Critical Questioner, Innovative Explorer, Pragmatist, Humanist
- Role Conflict System: Automatically detects and records conflicts between roles
- Depth Progression: Ensures each round has minimum depth progression (default: 5.0)
- Smart Triggering: Automatically detects questions requiring deep thinking
- Memory Storage: Automatically stores thinking processes for future reference
- Confidence Scoring: Calculates confidence level for each thinking process
- Configurable Parameters: Customize max rounds, role count, depth progression, and self-negation
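For reference, the defaults correspond to this `.env` fragment (the variable names match the configuration table later in this document):

```bash
DEEP_THINKING_ENABLED=true
DEEP_THINKING_MAX_ROUNDS=3
DEEP_THINKING_ROLE_COUNT=5
DEEP_THINKING_MIN_DEPTH_PROGRESSION=5.0
DEEP_THINKING_SELF_NEGATION=true
DEEP_THINKING_CONFLICT_GENERATION=true
DEEP_THINKING_MAX_TIME=60000
```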
Three Core Mechanisms of True Deep Thinking:
1. Stance Splitting (立场分裂): Generate conflicts
   - Creates multiple characters with different stances and perspectives
   - Each role has a unique stance, viewpoint, and personality
   - Roles debate and challenge each other's assumptions
   - Identifies commonalities and differences
2. Logical Progression (逻辑递进): Generate reasoning
   - Each round of thinking is deeper than the previous
   - Cumulative reasoning based on previous rounds
   - Layer-by-layer deepening from surface to essence
   - Builds complete logical reasoning chains
3. Self-Negation (自我否定): Generate corrections
   - Actively identifies limitations of previous rounds
   - Overturns imperfect viewpoints and conclusions
   - Reconstructs frameworks based on new insights
   - Continuously optimizes through self-negation
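The depth-progression and stopping rules above can be sketched as a simple control loop. Everything below (names, depth scores, stopping logic) is an illustrative simplification, not the engine's actual implementation:

```typescript
// Illustrative sketch of the round loop: each round must deepen the analysis
// by at least minDepthProgression, and the loop is capped at maxRounds.
interface ThinkingRound {
  round: number;
  depth: number;
}

function runRounds(
  candidateDepths: number[],      // depth score of each candidate round
  maxRounds = 3,                  // cf. DEEP_THINKING_MAX_ROUNDS
  minDepthProgression = 5.0       // cf. DEEP_THINKING_MIN_DEPTH_PROGRESSION
): ThinkingRound[] {
  const rounds: ThinkingRound[] = [];
  let lastDepth = 0;
  for (const depth of candidateDepths) {
    if (rounds.length >= maxRounds) break;              // round cap reached
    if (depth - lastDepth < minDepthProgression) break; // no real progression
    rounds.push({ round: rounds.length + 1, depth });
    lastDepth = depth;
  }
  return rounds;
}

// Rounds at depths 6, 12, 20 all progress by >= 5; the fourth candidate
// is cut off by the three-round cap.
const rounds = runRounds([6, 12, 20, 26]);
```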
For details, see Deep Thinking Engine Guide.
File tools:
- `file_read` - Read file contents
- `file_write` - Write content to files
- `file_list` - List files in directories
- `file_delete` - Delete files
- `file_copy` - Copy files
- `file_move` - Move/rename files
- `file_stat` - Get file statistics

Shell and system tools:
- `shell_execute` - Execute shell commands
- `process_list` - List running processes
- `system_info` - Get system information
- `environment_variable` - Get/set environment variables
- `environment_list` - List all environment variables
- `directory_change` - Change current directory
- `directory_get_current` - Get current directory
- `process_kill` - Kill processes

API tools:
- `http_get` - HTTP GET requests
- `http_post` - HTTP POST requests
- `http_put` - HTTP PUT requests
- `http_delete` - HTTP DELETE requests
- `http_patch` - HTTP PATCH requests
- `web_fetch` - Fetch web content
- `json_parse` - Parse JSON strings
- `json_stringify` - Stringify objects to JSON

Data tools:
- `csv_read` - Read CSV files
- `csv_write` - Write CSV files
- `json_read` - Read JSON files
- `json_write` - Write JSON files
- `text_analysis` - Analyze text content
- `text_search` - Search text patterns
- `text_replace` - Replace text patterns
- `math_calculate` - Mathematical calculations
- `json_list` - List JSON array elements
- `mean_value` - Calculate mean value from numbers
- `bar_chart` - Create bar charts

Utility tools:
- `encrypt` - Encrypt data
- `decrypt` - Decrypt data
- `hash` - Generate hash values
- `compress` - Compress data
- `decompress` - Decompress data
- `base64_encode` - Base64 encode
- `base64_decode` - Base64 decode
- `uuid_generate` - Generate UUIDs
- `random_string` - Generate random strings

Self-programming tool:
- `self_programming` - Generate, compile, and load new tools or plugins dynamically
- Persistent Storage: Store important information for future reference
- Semantic Search: Search memories by content and tags
- Tagging: Organize memories with tags for easy retrieval
- Importance Levels: Prioritize memories by importance
- Automatic Cleanup: Automatic memory management and cleanup
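The tagging and importance features can be illustrated with a minimal in-memory model; LocalBot's real `MemorySystem` API may differ, so the shapes below are assumptions:

```typescript
// Illustrative model of tagged, importance-ranked memories.
interface MemoryEntry {
  content: string;
  tags: string[];
  importance: number; // higher = more important
}

function searchByTag(memories: MemoryEntry[], tag: string): MemoryEntry[] {
  return memories
    .filter(m => m.tags.includes(tag))
    .sort((a, b) => b.importance - a.importance); // most important first
}

const store: MemoryEntry[] = [
  { content: "User prefers TypeScript", tags: ["prefs"], importance: 3 },
  { content: "Project runs on Node 20", tags: ["env", "prefs"], importance: 5 },
  { content: "Last build failed", tags: ["ci"], importance: 1 },
];

const prefs = searchByTag(store, "prefs");
```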
- Multi-Session Support: Manage multiple conversation sessions
- Conversation History: Track conversation history within sessions
- Session Persistence: Save and restore sessions
- Context Management: Maintain context across conversations
- Task Scheduling: Schedule tasks for specific times or intervals
- Workflow Engine: Execute complex workflows with multiple steps
- Monitoring System: Monitor system resources and activities
- Automation Controller: Control and manage automated processes
- Dynamic Plugin Loading: Load plugins from the `./plugins` directory
- Self-Programming Tool: AI can generate, compile, and load new tools dynamically
- Security Validation: Built-in plugin security validator for safe plugin execution
- Extensible Architecture: Easy to add custom plugins and tools
- Message Processing: `/api/v1/message` - Process user messages
- Session Management: `/api/v1/session/*` - Manage sessions
- Health Check: `/health` - Service health status
- Standardized Responses: Consistent API response format
- Request Tracing: Built-in request ID tracking
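For illustration, a response to `POST /api/v1/message` might look like the following. The exact field names are an assumption based on the features above (a standardized format plus a request ID), not the documented schema:

```json
{
  "success": true,
  "requestId": "req-1a2b3c",
  "data": {
    "sessionId": "session-123",
    "reply": "Hello! How can I help you today?"
  }
}
```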
- TypeScript - Main development language
- Node.js - Runtime environment (v20+)
- OpenAI SDK - LLM integration (supports multiple providers)
- Express - RESTful API server
- Winston - Logging framework
- Ollama - Local LLM support
- Playwright - Browser automation
- npm - Package manager
| Provider | Models |
|---|---|
| OpenAI | GPT-4, GPT-3.5-turbo |
| Alibaba Cloud (Qwen) | qwen-plus, qwen-turbo, qwen-max |
| Anthropic | Claude-3-opus, Claude-3-sonnet |
| Baidu (ERNIE Bot) | ERNIE-Bot series |
| Tencent (HunYuan) | HunYuan series |
| Zhipu AI (ChatGLM) | ChatGLM series |
| SiliconCloud | Various open-source models including Qwen |
| Ollama | Local models (llama3.2, etc.) |
```
local-bot/
├── src/
│ ├── agent/ # AI agent core logic
│ │ ├── AgentProcessor.ts # Main AI processor
│ │ └── MultiAIRouter.ts # Multi-AI routing
│ ├── skills/ # Skills and tools system
│ │ ├── SkillManager.ts # Tool and skill management
│ │ ├── SkillsHub.ts # OpenClaw-style skills
│ │ ├── EnhancedSkillsHub.ts # Enhanced skills with priority and hot reload
│ │ ├── tools/ # Tool implementations
│ │ │ ├── FileTools.ts
│ │ │ ├── ShellTools.ts
│ │ │ ├── ApiTools.ts
│ │ │ ├── DataTools.ts
│ │ │ └── UtilityTools.ts
│ │ └── registerTools.ts # Tool registration
│ ├── memory/ # Memory system
│ │ ├── MemorySystem.ts # Persistent memory storage
│ │ └── EnhancedMemorySystem.ts # Three-tier memory architecture
│ ├── session/ # Session management
│ │ └── SessionManager.ts # Session handling
│ ├── tasks/ # Task scheduling and automation
│ │ ├── AutomationController.ts
│ │ ├── TaskScheduler.ts
│ │ ├── WorkflowEngine.ts
│ │ └── MonitoringSystem.ts
│ ├── business-processes/ # Business process models
│ │ ├── BusinessProcessManager.ts
│ │ ├── SalesProcessModel.ts
│ │ ├── FinanceProcessModel.ts
│ │ ├── HRProcessModel.ts
│ │ ├── OperationsProcessModel.ts
│ │ ├── HomeAutomationModel.ts
│ │ ├── TaxPlanningModel.ts
│ │ ├── ProjectManagementModel.ts
│ │ ├── CRMModel.ts
│ │ ├── MarketingModel.ts
│ │ ├── LegalComplianceModel.ts
│ │ ├── DataAnalyticsReportModel.ts
│ │ └── PersonalAssistantModel.ts
│ ├── api/ # API layer
│ │ ├── ApiService.ts
│ │ └── ApiResponse.ts
│ ├── plugins/ # Plugin system
│ │ ├── PluginManager.ts
│ │ ├── PluginSecurityValidator.ts
│ │ ├── PluginTypes.ts
│ │ └── SelfProgrammingTool.ts
│ ├── services/ # External services
│ │ ├── AIService.ts
│ │ ├── OpenAIService.ts
│ │ ├── QwenService.ts
│ │ ├── ClaudeService.ts
│ │ ├── ERNIEBotService.ts
│ │ ├── HunYuanService.ts
│ │ ├── ZhipuAIService.ts
│ │ ├── SiliconCloudService.ts
│ │ └── OllamaService.ts
│ ├── mcp/ # Model Context Protocol
│ │ ├── MCPProtocol.ts # MCP protocol definitions
│ │ ├── EnhancedMCPProtocol.ts # Enhanced MCP with caching and tracking
│ │ ├── MCPServer.ts # MCP server implementation
│ │ ├── MCPCLI.ts # MCP CLI interface
│ │ └── MCPStdioTransport.ts # MCP stdio transport
│ ├── utils/ # Utility functions
│ │ ├── Logger.ts
│ │ └── RetryHandler.ts
│ ├── gateway/ # API gateway
│ │ └── Gateway.ts
│ ├── interface/ # CLI interface
│ │ └── CLIInterface.ts
│ └── index.ts # Entry point
├── workspace/
│ └── skills/ # Skill definitions (Markdown)
│ ├── business-automation/
│ ├── code-generation/
│ ├── data-analysis/
│ ├── data-visualization/
│ └── ...
├── plugins/ # Plugin directory
│ ├── examples/
│ │ ├── hello-world-plugin/
│ │ └── weather-plugin/
│ └── ...
├── memory/ # Memory storage (auto-created)
├── sessions/ # Session data (auto-created)
└── logs/ # Log files (auto-created)
```
You can install LocalBot globally as a CLI tool:
```bash
npm install -g .
```

After installation, you can use the `localbot` command from anywhere:

```bash
localbot
```

1. Clone the repository:

```bash
git clone <repository-url>
cd local-bot
```

2. Install dependencies:

```bash
npm install
```

3. Copy the environment variables file:

```bash
cp .env.example .env
```

4. Edit `.env` and configure your LLM provider. For OpenAI:

```bash
LLM_PROVIDER=openai
OPENAI_API_KEY=your_openai_api_key_here
```

For Alibaba Cloud (Qwen):

```bash
LLM_PROVIDER=aliyun
ALIYUN_API_KEY=your_aliyun_api_key_here
ALIYUN_MODEL=qwen-plus
```

For Ollama (local):

```bash
LLM_PROVIDER=ollama
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL_NAME=llama3.2
```

5. Start the assistant:

```bash
npm start
```

In CLI mode, you can use the following commands:

- `help` - Show help information
- `tools` - List all available tools
- `skills` - List all available skills
- `memory` - Show recent memory
- `clear` - Clear session history
- `ai <provider>` - Switch AI provider
- `stats` - Show AI usage statistics
- `process` - List all available business processes
- `run <process-name>` - Execute a specific business process
- `exit` - Exit the assistant
Deep Thinking: The assistant automatically detects complex questions and triggers deep thinking when needed. Deep thinking involves:
- Multi-role stance splitting with 5 different perspectives
- Logical progression across multiple thinking rounds
- Self-negation to refine and improve conclusions
- Role conflict generation and resolution
- Depth progression tracking
Example questions that trigger deep thinking:
- "Why does artificial intelligence need deep learning?"
- "How to balance economic development and environmental protection?"
- "What is consciousness?"
```bash
npm run start:server
```

LocalBot supports the MCP protocol and can integrate with MCP-compatible clients (such as Claude Desktop and Cursor):

```bash
npm run start:mcp
```

Or use the CLI directly:

```bash
localbot --mcp
```

MCP Configuration Example:
Add to Claude Desktop configuration file:
```json
{
  "mcpServers": {
    "localbot": {
      "command": "node",
      "args": ["<path-to-localbot>\\dist\\index.js"],
      "env": {
        "RUN_MODE": "mcp"
      }
    }
  }
}
```

Note:
- Replace `<path-to-localbot>` with your actual LocalBot project path
- Windows paths use double backslashes (`\\`)
- macOS/Linux paths use forward slashes (`/`)

Examples:
- Windows: `E:\\work\\202601211205\\local-bot\\dist\\index.js`
- macOS/Linux: `/Users/username/local-bot/dist/index.js`

- Prompt Templates: Pre-defined prompt templates for common tasks
For detailed MCP documentation, see docs/MCP_PROTOCOL.md.
Run in development mode:

```bash
npm run dev
```

Build for production:

```bash
npm run build
```

Send a message:

```bash
curl -X POST http://localhost:3000/api/v1/message \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello, how can you help me?",
    "sessionId": "session-123"
  }'
```

Check service health:

```bash
curl http://localhost:3000/health
```

List sessions:

```bash
curl http://localhost:3000/api/v1/sessions
```

| Variable | Description | Default |
|---|---|---|
| `LLM_PROVIDER` | LLM provider (openai, aliyun, anthropic, baidu, tencent, zhipu, siliconcloud, ollama) | openai |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `ALIYUN_API_KEY` | Alibaba Cloud API key | - |
| `ALIYUN_MODEL` | Alibaba Cloud model | qwen-plus |
| `OLLAMA_API_URL` | Ollama API URL | http://localhost:11434 |
| `OLLAMA_MODEL_NAME` | Ollama model name | llama3.2 |
| `PORT` | Server port | 3000 |
| `LOG_LEVEL` | Log level (error, warn, info, debug) | info |
| `MEMORY_DIR` | Memory storage directory | ./memory |
| `SKILLS_DIR` | Skills directory | ./workspace/skills |
| `ENABLE_PERSISTENCE` | Enable session persistence | true |
| `PERSISTENCE_DIR` | Persistence directory | ./sessions |
| `DEEP_THINKING_ENABLED` | Enable deep thinking engine | true |
| `DEEP_THINKING_MAX_ROUNDS` | Maximum thinking rounds | 3 |
| `DEEP_THINKING_ROLE_COUNT` | Number of thinking roles | 5 |
| `DEEP_THINKING_MIN_DEPTH_PROGRESSION` | Minimum depth progression per round | 5.0 |
| `DEEP_THINKING_SELF_NEGATION` | Enable self-negation mechanism | true |
| `DEEP_THINKING_CONFLICT_GENERATION` | Enable role conflict generation | true |
| `DEEP_THINKING_MAX_TIME` | Maximum thinking time (ms) | 60000 |
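Putting a few of these together, a minimal `.env` for a local Ollama setup (values are the documented defaults):

```bash
LLM_PROVIDER=ollama
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL_NAME=llama3.2
PORT=3000
LOG_LEVEL=info
```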
1. Create a new directory in `workspace/skills/`:

```bash
mkdir workspace/skills/my-skill
```

2. Create a `SKILL.md` file:

```markdown
---
name: my-skill
description: My custom skill
emoji: 🎯
category: custom
version: 1.0.0
---

# My Custom Skill

## When to Use
- Describe when to use this skill

## How to Use
1. Step 1
2. Step 2
3. Step 3

## Example
User request: "Example request"
Your response: "Example response"
```

3. Restart the assistant to load the new skill

Skill metadata fields:
- `name`: Unique skill identifier
- `description`: Skill description
- `emoji`: Skill emoji (optional)
- `category`: Skill category (optional)
- `version`: Skill version (optional)
- `author`: Skill author (optional)
- `requires`: Required binaries and environment variables (optional)
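A frontmatter example that uses the optional fields; the nested structure of `requires` shown here is an assumption for illustration, so check the skills documentation for the exact shape:

```markdown
---
name: report-writer
description: Draft weekly status reports
emoji: 📝
category: custom
version: 1.0.0
author: Your Name
requires:
  binaries: [pandoc]
  env: [REPORT_TEMPLATE_DIR]
---
```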
1. Implement the `Tool` interface:

```typescript
import { Tool, ToolResult } from '../types';

export class MyTool implements Tool {
  name = 'my_tool';
  description = 'Description of my tool';
  category = 'other' as const;

  async execute(params: Record<string, unknown>): Promise<ToolResult> {
    try {
      // Tool logic here
      return {
        success: true,
        data: { result: 'success' }
      };
    } catch (error) {
      return {
        success: false,
        error: (error as Error).message
      };
    }
  }
}
```

2. Register the tool in `registerTools.ts`:

```typescript
// SkillManager and Skill come from the project's type definitions; the
// exact import path depends on where registerTools.ts lives in your tree.
import { MyTool } from './tools/MyTools';

export function registerDefaultTools(skillManager: SkillManager): void {
  const myTools = [new MyTool()];
  myTools.forEach(tool => skillManager.registerTool(tool));

  const mySkill: Skill = {
    name: 'my-skill',
    description: 'My custom skill',
    tools: myTools,
    enabled: true,
    permissions: myTools.map(tool => ({
      toolName: tool.name,
      allowed: true,
      requireConfirmation: false
    }))
  };

  skillManager.registerSkill(mySkill);
}
```

The plugin system allows you to extend LocalBot's functionality with custom tools and features.
Create your plugin in the plugins/ directory:
```bash
mkdir plugins/my-plugin
cd plugins/my-plugin
```

Create `plugin.json`:

```json
{
  "name": "my-plugin",
  "version": "1.0.0",
  "description": "My custom plugin",
  "author": "Your Name",
  "main": "index.ts",
  "permissions": ["file_read", "file_write"],
  "dependencies": []
}
```

Create `index.ts`:

```typescript
import { Plugin, Tool } from '../../src/plugins/PluginTypes';

export class MyTool implements Tool {
  name = 'my_custom_tool';
  description = 'My custom tool';
  category = 'custom' as const;

  async execute(params: Record<string, unknown>): Promise<any> {
    try {
      // Tool logic
      return {
        success: true,
        data: { result: 'Success' }
      };
    } catch (error) {
      return {
        success: false,
        error: (error as Error).message
      };
    }
  }
}

export const plugin: Plugin = {
  name: 'my-plugin',
  version: '1.0.0',
  description: 'My custom plugin',
  author: 'Your Name',
  tools: [new MyTool()],
  onLoad: async () => {
    console.log('Plugin loaded successfully');
  },
  onUnload: async () => {
    console.log('Plugin unloaded');
  }
};
```

The AI can use the `self_programming` tool to dynamically generate new tools:
User: Help me create a tool to calculate Fibonacci numbers
AI will use the self_programming tool:
1. Generate tool code
2. Compile the code
3. Load the new tool into the system
4. Return tool usage instructions
The plugin system includes security validation mechanisms:
- Code sandbox execution
- Permission checks
- API call limits
- Resource usage monitoring
LocalBot provides 11 business domains with 44 predefined business processes:
Sales:
- Customer development process
- Opportunity management process
- Sales performance analysis process

Finance:
- Budget management process
- Expense reimbursement process
- Financial reporting process
- Tax processing process

Operations:
- Inventory management process
- Supply chain optimization process
- Quality control process

Human Resources:
- Recruitment process
- Employee onboarding process
- Performance evaluation process
- Training management process

Home Automation:
- Smart lighting control
- Temperature adjustment process
- Security monitoring process

Tax Planning:
- Tax planning process
- Deduction optimization process
- Compliance check process

Project Management:
- Project planning process
- Task assignment process
- Progress tracking process

CRM:
- Customer acquisition process
- Customer retention process
- Customer support process

Marketing:
- Campaign planning process
- Content marketing process
- Social media management process

Legal Compliance:
- Compliance check process
- Document management process
- Audit preparation process

Data Analytics:
- Data collection process
- Analysis reporting process
- Visualization presentation process

Personal Assistant:
- Schedule management process
- Task reminder process
- Information summary process
List the available processes:

```
process
```

Run a specific process:

```
run <process-name>
```

For example:

```
run budget-management-process
run recruitment-process
run project-planning-process
```
You can create custom business process models in the `src/business-processes/` directory:
```typescript
import { WorkflowDefinition, BusinessDomain } from './BusinessProcessManager';

export const myCustomProcess: WorkflowDefinition = {
  name: 'my-custom-process',
  description: 'My custom business process',
  domain: BusinessDomain.OPERATIONS,
  steps: [
    {
      id: 'step1',
      name: 'Step 1',
      description: 'Execute step 1',
      tool: 'tool_name',
      parameters: { /* parameters */ }
    },
    {
      id: 'step2',
      name: 'Step 2',
      description: 'Execute step 2',
      tool: 'tool_name',
      parameters: { /* parameters */ }
    }
  ]
};
```

Build and run with Docker:

```bash
docker build -t localbot .
docker run -p 3000:3000 localbot
```

Or with Docker Compose:

```bash
docker-compose up -d
```

Or deploy to Kubernetes:

```bash
kubectl apply -f k8s-deployment.yaml
```

For detailed documentation, see:
- API Documentation
- API Specification
- Architecture Overview
- Architecture Optimization Guide
- Skills System
- Automation Capabilities
- Business Processes
- Custom Skills and Models Guide
- GPU Setup
- Ollama Configuration & Troubleshooting
- Plugin Development Guide
- Reverse Control Engine
- Proactive Engine
- Deep Thinking Engine
- Multi-Platform Guide
- Mobile Deployment Guide
- iOS Deployment Guide
- Web Development Guide
- WeCom Integration Guide
- Empty responses from AI
  - Check LLM provider configuration
  - Verify API keys are valid
  - Check network connectivity
  - Review logs in `logs/combined.log`
- Tools not executing
  - Verify tools are registered in `registerTools.ts`
  - Check tool permissions
  - Review error logs
- Skills not loading
  - Ensure SKILL.md files are properly formatted
  - Check the skills directory path
  - Verify metadata is correct
- Deep thinking not triggering
  - Check that `DEEP_THINKING_ENABLED` is set to `true`
  - Ensure the question contains deep thinking indicators (why, how, analyze, etc.)
  - Review logs for "Deep thinking triggered" messages
  - Verify `DEEP_THINKING_MAX_TIME` is sufficient for complex questions
- Deep thinking takes too long
  - Reduce `DEEP_THINKING_MAX_ROUNDS` (default: 3)
  - Reduce `DEEP_THINKING_ROLE_COUNT` (default: 5)
  - Increase `DEEP_THINKING_MIN_DEPTH_PROGRESSION` to stop earlier
  - Reduce `DEEP_THINKING_MAX_TIME` (default: 60000 ms)
Contributions are welcome! Please feel free to submit a pull request.
MIT