A sophisticated context-aware chatbot system with a dynamic tool-based architecture that manages conversation state and integrates with Matrix and Farcaster platforms. The system implements an advanced world state management approach for maintaining conversation context across multi-platform interactions.
- Dynamic Tool Architecture: Extensible tool system with runtime registration and AI integration
- Context-Aware Conversations: Maintains evolving world state across conversations with advanced deduplication
- Multi-Platform Integration: Support for Matrix and Farcaster with standardized tool interfaces
- AI Conversation Continuity: Bot tracks its own messages for improved conversation flow
- Persistent State Management: Robust storage of conversation context and world state
- AI-Powered Decision Making: Intelligent response generation with dynamic tool awareness
- Advanced Rate Limiting: Smart rate limiting with backoff and quota management
- Thread Management: Intelligent conversation thread tracking and context preservation
- Matrix Room Management: Auto-join functionality with invite handling
- Enhanced User Profiling: Rich user metadata tracking for social platforms
The system has been architected with a dynamic tool-based design for maximum extensibility and maintainability. The architecture follows a layered approach with clear separation of concerns.
- ToolRegistry: Manages dynamic tool registration and provides AI-ready descriptions
- ToolInterface: Abstract base class for all tools with standardized execution patterns
- ActionContext: Comprehensive dependency injection for tools (observers, configurations, state managers)
- WorldStateManager: Central state coordinator with advanced message deduplication
- Message & Channel Models: Rich data models supporting multi-platform message metadata
- Thread Management: Intelligent conversation threading for platforms like Farcaster
- Rate Limiting: Built-in rate limit tracking and enforcement
- ContextAwareOrchestrator: Main coordinator using the tool system with intelligent cycle management
- AIDecisionEngine: Receives dynamic tool descriptions and optimized payloads for decision-making
- Context Manager: Advanced conversation context preservation and retrieval
- Matrix Integration: Full Matrix protocol support with room management and invite handling
- Farcaster Integration: Complete Farcaster API integration with enhanced user profiling
- Standardized Interfaces: Unified message and action handling across platforms
All platform interactions are handled through standardized tools with consistent interfaces:
- WaitTool - Intelligent observation and waiting actions with configurable intervals
- ObserveTool - Advanced world state observation with filtering and summarization
- SendMatrixReplyTool - Matrix reply functionality with thread context awareness
- SendMatrixMessageTool - Matrix message sending with formatting support
- JoinMatrixRoomTool - Automated room joining with invite acceptance
- SendFarcasterPostTool - Farcaster posting with media support and rate limiting
- SendFarcasterReplyTool - Farcaster replying with thread context preservation
- LikeFarcasterPostTool - Social engagement actions with deduplication
- QuoteFarcasterPostTool - Quote casting with content attribution
- FollowFarcasterUserTool - User following functionality
- SendFarcasterDirectMessageTool - Private messaging capabilities
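All of these tools follow the same registration and dispatch pattern: the orchestrator registers each tool, exposes the tool descriptions to the AI, and routes the AI's chosen action back to the matching tool's execute() method. The sketch below illustrates that pattern with a simplified stand-in registry; only register_tool mirrors the real API shown later in this document, while describe_tools and dispatch are illustrative names rather than the project's actual ToolRegistry interface.

```python
# Simplified sketch of the registration/dispatch pattern; not the project's
# actual ToolRegistry implementation (method names here are illustrative).
from typing import Any, Dict


class SimpleToolRegistry:
    """Keeps tools by name and exposes their descriptions for the AI payload."""

    def __init__(self) -> None:
        self._tools: Dict[str, Any] = {}

    def register_tool(self, tool: Any) -> None:
        self._tools[tool.name] = tool

    def describe_tools(self) -> Dict[str, Dict[str, Any]]:
        # These descriptions are what the AI engine sees when choosing actions.
        return {
            name: {"description": t.description, "parameters": t.parameters_schema}
            for name, t in self._tools.items()
        }

    async def dispatch(self, action_name: str, params: Dict[str, Any], context: Any) -> Dict[str, Any]:
        # Route the AI's chosen action back to the matching tool.
        tool = self._tools.get(action_name)
        if tool is None:
            return {"status": "error", "message": f"unknown tool: {action_name}"}
        return await tool.execute(params, context)
```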
- Extensibility: Add new tools by implementing ToolInterface and registering them
- Maintainability: Platform logic isolated in dedicated tool classes with clear boundaries
- Testability: Clean dependency injection via ActionContext enables comprehensive testing
- AI Integration: Tool descriptions automatically update AI capabilities and decision-making
- Consistency: Standardized parameter schemas and error handling across all tools
- Performance: Optimized payload generation and intelligent message filtering
- Reliability: Robust error handling, rate limiting, and state consistency
- Setup environment:
  cp env.example .env
  nano .env  # Fill in your API keys and credentials
- Deploy with Docker:
  ./scripts/deploy.sh
- Monitor logs:
  docker-compose logs -f chatbot

- Install dependencies:
  pip install -e .
- Configure environment:
  cp env.example .env
  nano .env  # Add your credentials
- Run the system:
  python -m chatbot.main

Alternatively, with Poetry:
  poetry install
  poetry run python -m chatbot.main

The system implements a sophisticated world state management approach that maintains comprehensive awareness of all platform activities and conversations.
- Multi-Platform Messages: Unified message model supporting Matrix and Farcaster with platform-specific metadata
- Rich User Profiles: Enhanced user information including follower counts, bios, profile pictures, and verification badges
- Deduplication: Advanced message deduplication across channels and platforms to prevent processing duplicates
- Thread Tracking: Intelligent conversation thread management for platforms supporting threaded discussions
- Dynamic Channel Creation: Automatic channel discovery and registration as the bot encounters new rooms/feeds
- Activity Summarization: Real-time activity summaries with user engagement metrics and timestamp ranges
- Matrix Room Metadata: Complete room information including topics, member counts, power levels, and encryption status
- Invite Management: Pending Matrix room invites with automated acceptance workflows
- Comprehensive Logging: Complete audit trail of all bot actions with parameters and results
- Action Deduplication: Prevents duplicate actions (likes, replies, follows) with intelligent tracking
- Scheduled Action Updates: Support for updating scheduled/pending actions with final results
- Rate Limit Integration: Action history informs rate limiting decisions and backoff strategies
- Primary Channel Focus: Detailed information for the active conversation channel
- Smart Summarization: Intelligent summarization of secondary channels to reduce token usage
- User Context Filtering: Bot's own messages are filtered out to focus on external interactions
- Configurable Truncation: Adjustable limits for messages, actions, and thread history based on AI model constraints (a payload-construction sketch follows these lists)
- Efficient Updates: Incremental state updates with minimal memory footprint
- Background Processing: Non-blocking state updates that don't interrupt conversation flow
- Smart Caching: Intelligent caching of frequently accessed state components
- Memory Management: Automatic cleanup of old messages and actions to prevent memory bloat
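To make the truncation and filtering described above concrete, here is a minimal sketch of per-channel payload construction. It assumes messages are plain dicts with a sender field; the function names and message shape are illustrative, not the project's actual API.

```python
# Illustrative sketch of per-channel payload truncation and bot-message
# filtering; field and function names are hypothetical.
from typing import Any, Dict, List


def build_channel_payload(
    messages: List[Dict[str, Any]],
    bot_user_id: str,
    history_length: int = 10,  # mirrors AI_CONVERSATION_HISTORY_LENGTH
) -> List[Dict[str, Any]]:
    # Drop the bot's own messages so the AI focuses on external interactions,
    # then keep only the most recent N messages for the primary channel.
    external = [m for m in messages if m.get("sender") != bot_user_id]
    return external[-history_length:]


def summarize_secondary_channel(messages: List[Dict[str, Any]], channel_id: str) -> Dict[str, Any]:
    # Secondary channels are reduced to a compact summary to save tokens.
    return {
        "channel_id": channel_id,
        "message_count": len(messages),
        "last_senders": [m.get("sender") for m in messages[-3:]],
    }
```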
The world state provides rich analytics for understanding conversation patterns and bot performance:
- Conversation Metrics: Message frequency, user engagement, and response patterns
- Platform Activity: Cross-platform activity correlation and user behavior analysis
- Bot Performance: Action success rates, response times, and error patterns
- Social Dynamics: User interaction patterns, thread participation, and engagement quality
The system uses a centralized configuration approach with environment-based settings that support both development and production deployments.
AI_MODEL=openai/gpt-4o-mini # Primary AI model
OPENROUTER_API_KEY=your_key_here # OpenRouter API access
PRIMARY_LLM_PROVIDER=openrouter # LLM provider selection
OLLAMA_API_URL=http://localhost:11434 # Local Ollama instance (optional)

# Matrix Configuration
MATRIX_HOMESERVER=https://matrix.org # Matrix homeserver URL
MATRIX_USER_ID=@bot:matrix.org # Bot's Matrix user ID
MATRIX_PASSWORD=secure_password # Matrix account password
MATRIX_ROOM_ID=#room:matrix.org # Default monitoring room
# Farcaster Configuration (Optional)
NEYNAR_API_KEY=your_neynar_key # Neynar API for Farcaster
FARCASTER_BOT_FID=12345 # Bot's Farcaster ID
FARCASTER_BOT_SIGNER_UUID=uuid_here # Signing key for posts
FARCASTER_BOT_USERNAME=botname # Bot username for filtering

# Core System
OBSERVATION_INTERVAL=2.0 # Seconds between observation cycles
MAX_CYCLES_PER_HOUR=300 # Rate limiting for AI cycles
CHATBOT_DB_PATH=chatbot.db # Database file location
# AI Payload Optimization
AI_CONVERSATION_HISTORY_LENGTH=10 # Messages per channel for AI
AI_ACTION_HISTORY_LENGTH=5 # Action history depth
AI_THREAD_HISTORY_LENGTH=5 # Thread message depth
AI_OTHER_CHANNELS_SUMMARY_COUNT=3 # Secondary channels to include
AI_INCLUDE_DETAILED_USER_INFO=true # Full user metadata vs summary

The configuration system supports:
- Environment Variable Override: All settings can be overridden via environment variables
- Development vs Production: Different configurations for different deployment environments
- Secrets Management: Secure handling of API keys and credentials
- Runtime Reconfiguration: Some settings can be adjusted without restart (future enhancement)
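As a rough illustration of the environment-variable override behavior, the sketch below loads a few of the settings listed above with sensible defaults. The AppConfig class and from_env constructor are assumptions for illustration, not the project's actual configuration module.

```python
# Minimal sketch of environment-based configuration with overridable defaults;
# AppConfig and from_env are hypothetical names.
import os
from dataclasses import dataclass


@dataclass
class AppConfig:
    ai_model: str = "openai/gpt-4o-mini"
    observation_interval: float = 2.0
    max_cycles_per_hour: int = 300
    db_path: str = "chatbot.db"

    @classmethod
    def from_env(cls) -> "AppConfig":
        # Every setting can be overridden by the corresponding environment variable.
        return cls(
            ai_model=os.getenv("AI_MODEL", cls.ai_model),
            observation_interval=float(os.getenv("OBSERVATION_INTERVAL", cls.observation_interval)),
            max_cycles_per_hour=int(os.getenv("MAX_CYCLES_PER_HOUR", cls.max_cycles_per_hour)),
            db_path=os.getenv("CHATBOT_DB_PATH", cls.db_path),
        )
```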
If you're using a dev container and need Docker support:
- Configure Docker support:
./scripts/setup-docker.sh
- Rebuild your dev container:
- Open VS Code Command Palette (Ctrl+Shift+P)
- Run: "Dev Containers: Rebuild Container"
The project includes comprehensive testing infrastructure to ensure reliability and maintainability.
- Core Component Tests: Complete coverage of world state, AI engine, and orchestrator components
- Tool System Tests: Individual tool testing with mocked dependencies (see the example test after the layout below)
- Integration Tests: End-to-end testing of platform integrations
- Configuration Tests: Validation of configuration loading and environment handling
tests/
├── test_ai_engine.py                  # AI decision engine testing
├── test_core.py                       # Core component unit tests
├── test_orchestrator_extended.py      # Orchestrator integration tests
├── test_world_state_extended.py       # World state management tests
├── test_tool_system.py                # Tool registry and execution tests
├── test_matrix_tools_and_observer.py  # Matrix platform integration
├── test_farcaster_tools_follow_dm.py  # Farcaster platform features
├── test_integration.py                # Full system integration tests
└── test_robust_json_parsing.py        # AI response parsing reliability
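As noted in the coverage list above, individual tools are tested with mocked dependencies. The sketch below shows the general shape of such a test, using a MagicMock in place of ActionContext; EchoTool is a throwaway tool defined only for the test, and pytest-asyncio is assumed to be available for the async test function.

```python
# Sketch of testing an individual tool with a mocked ActionContext.
from typing import Any, Dict
from unittest.mock import MagicMock

import pytest

from chatbot.tools.base import ToolInterface, ActionContext


class EchoTool(ToolInterface):
    """Throwaway tool used only to demonstrate the test pattern."""

    @property
    def name(self) -> str:
        return "echo"

    @property
    def description(self) -> str:
        return "Echoes the provided text back as the result"

    @property
    def parameters_schema(self) -> Dict[str, Any]:
        return {"text": "string (text to echo)"}

    async def execute(self, params: Dict[str, Any], context: ActionContext) -> Dict[str, Any]:
        return {"status": "success", "message": params["text"]}


@pytest.mark.asyncio
async def test_echo_tool_returns_input_text():
    # Mocked dependencies: no real observers or state managers are touched.
    context = MagicMock(spec=ActionContext)
    result = await EchoTool().execute({"text": "hello"}, context)
    assert result["status"] == "success"
    assert result["message"] == "hello"
```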
- Black: Consistent code formatting across the entire codebase
- isort: Import statement organization and optimization
- flake8: Code style enforcement and basic linting
- mypy: Static type checking for improved reliability
- pytest: Comprehensive test framework with async support
# Run tests with coverage
poetry run pytest tests/ --cov=chatbot --cov-report=html --cov-report=term
# View HTML coverage report
open htmlcov/index.html

# Main application
poetry run python -m chatbot.main
# Testing
poetry run pytest tests/ -v # Run all tests
poetry run pytest tests/ --cov=chatbot # With coverage
# Code Quality
poetry run black chatbot/ && poetry run isort chatbot/ # Format code
poetry run flake8 chatbot/ && poetry run mypy chatbot/ # Lint and type check
# Development Tools
poetry run python control_panel.py # Control panel interface

The project includes pre-configured VS Code tasks for common operations:
- Run Chatbot Main Application: Starts the main bot with background execution
- Run Control Panel: Launches the web-based control interface
- Run Tests: Executes the full test suite
- Run Tests with Coverage: Tests with HTML coverage reporting
- Format Code: Applies Black and isort formatting
- Lint Code: Runs flake8 and mypy validation
# Check Matrix connectivity
grep "matrix_connected" chatbot.log
# Verify Farcaster API access
grep "farcaster_connected" chatbot.log
# Monitor rate limiting
grep "rate_limit" chatbot.log# Check world state consistency
grep "WorldState:" chatbot.log
# Monitor message deduplication
grep "Deduplicated message" chatbot.log
# Track action execution
grep "Action completed" chatbot.log# Set debug level logging
export LOG_LEVEL=DEBUG
# Monitor specific components
grep "ContextAwareOrchestrator" chatbot.log
grep "ToolRegistry" chatbot.log
grep "WorldStateManager" chatbot.logThe system includes a web-based control panel for real-time monitoring:
poetry run python control_panel.py
# Access at http://localhost:5000

Features:
- Real-time State Monitoring: Live view of world state and recent activities
- Action History: Complete audit trail of bot actions and results
- Platform Status: Connection status and health metrics for all platforms
- Configuration Viewer: Current configuration settings and environment variables
import time
from typing import Dict, Any

from chatbot.tools.base import ToolInterface, ActionContext


class CustomTool(ToolInterface):
    @property
    def name(self) -> str:
        return "custom_action"

    @property
    def description(self) -> str:
        return "Performs a custom action with specified parameters"

    @property
    def parameters_schema(self) -> Dict[str, Any]:
        return {
            "parameter1": "string (description of parameter)",
            "parameter2": "integer (another parameter description)"
        }

    async def execute(self, params: Dict[str, Any], context: ActionContext) -> Dict[str, Any]:
        # Implementation here
        return {
            "status": "success",
            "message": "Action completed successfully",
            "timestamp": time.time()
        }


# Register the tool (registry is the orchestrator's ToolRegistry instance)
registry.register_tool(CustomTool())

- Parameter Validation: Always validate input parameters before execution (see the sketch after this list)
- Error Handling: Provide meaningful error messages and proper exception handling
- State Updates: Use the context to update world state appropriately
- Rate Limiting: Respect platform rate limits and implement backoff strategies
- Logging: Include comprehensive logging for debugging and monitoring
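A lightweight way to follow the parameter-validation guideline is to check required fields at the top of execute() and return a structured error result instead of raising. The helper below is purely illustrative and not part of the project's API.

```python
# Hypothetical validation helper for the start of a tool's execute() method.
from typing import Any, Dict, Iterable, Optional


def validate_params(params: Dict[str, Any], required: Iterable[str]) -> Optional[Dict[str, Any]]:
    """Return an error result if any required parameter is missing, else None."""
    missing = [key for key in required if key not in params or params[key] in (None, "")]
    if missing:
        return {
            "status": "error",
            "message": f"missing required parameters: {', '.join(missing)}",
        }
    return None


# Usage inside execute():
#     error = validate_params(params, required=["channel_id", "content"])
#     if error:
#         return error
```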
- Observer Implementation: Create an observer class for the new platform (a rough sketch follows this list)
- Tool Development: Implement platform-specific tools following the ToolInterface
- Message Model Extensions: Extend the Message class for platform-specific metadata
- Configuration Updates: Add necessary configuration parameters
- Integration Testing: Develop comprehensive tests for the new platform
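For the observer step, a new platform integration usually amounts to polling (or subscribing to) the platform API, deduplicating what comes back, and handing new messages to the world state. The sketch below shows that rough shape; the class, its method names, and the world_state.add_message call are assumptions for illustration only.

```python
# Rough shape of a new platform observer; method names and the
# world_state.add_message call are hypothetical.
import asyncio
from typing import Any, Dict, List


class ExamplePlatformObserver:
    """Polls a hypothetical platform API and feeds new messages into world state."""

    def __init__(self, api_client: Any, world_state: Any, poll_interval: float = 2.0) -> None:
        self.api_client = api_client
        self.world_state = world_state
        self.poll_interval = poll_interval
        self._seen_ids: set = set()

    async def run(self) -> None:
        while True:
            for message in await self._fetch_new_messages():
                if message["id"] in self._seen_ids:
                    continue  # deduplicate before touching world state
                self._seen_ids.add(message["id"])
                self.world_state.add_message(message)  # assumed WorldStateManager method
            await asyncio.sleep(self.poll_interval)

    async def _fetch_new_messages(self) -> List[Dict[str, Any]]:
        # Platform-specific API call goes here.
        return await self.api_client.fetch_messages()
```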
- Message Rotation: Automatic cleanup of old messages to prevent memory bloat (see the sketch after these lists)
- Action History Limits: Configurable limits on action history retention
- State Compression: Efficient serialization and storage of world state
- Garbage Collection: Proactive cleanup of unused objects and references
- Async Architecture: Fully asynchronous design for maximum concurrency
- Batch Processing: Efficient batch processing of multiple messages
- Smart Filtering: Intelligent filtering to reduce unnecessary processing
- Cache Optimization: Strategic caching of frequently accessed data
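One simple way to realize the message-rotation behavior described above is a bounded deque per channel, as sketched below; the actual WorldStateManager may use different structures and limits.

```python
# Illustrative message rotation using a bounded deque per channel.
from collections import deque
from typing import Any, Deque, Dict

MAX_MESSAGES_PER_CHANNEL = 200  # assumed limit for illustration

channel_messages: Dict[str, Deque[Dict[str, Any]]] = {}


def record_message(channel_id: str, message: Dict[str, Any]) -> None:
    # deque(maxlen=...) silently drops the oldest entry once the limit is hit,
    # keeping per-channel memory bounded without explicit cleanup passes.
    queue = channel_messages.setdefault(
        channel_id, deque(maxlen=MAX_MESSAGES_PER_CHANNEL)
    )
    queue.append(message)
```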
- Cycle Performance: Monitoring of observation and decision cycle times
- Platform Health: Connection status and API response times for all platforms
- Action Success Rates: Tracking of tool execution success and failure rates
- Memory Usage: Monitoring of world state size and memory consumption
- Rate Limit Status: Real-time tracking of API rate limit utilization
The system supports integration with external monitoring solutions:
- Structured Logging: JSON-formatted logs for easy parsing and analysis (see the snippet after this list)
- Metrics Export: Prometheus-compatible metrics endpoints (future enhancement)
- Health Checks: HTTP health check endpoints for load balancer integration
- Alert Integration: Support for webhook-based alerting systems
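For structured logging, a minimal JSON formatter built on the standard logging module is enough to produce machine-parseable lines; a dedicated logging library could be used instead. The snippet below is illustrative only.

```python
# Minimal JSON log formatter using only the standard library.
import json
import logging
import time


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.getLogger("chatbot").addHandler(handler)
```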
- Follow the existing code style enforced by Black and isort
- Maintain type hints for all public interfaces
- Include comprehensive docstrings for all classes and methods
- Write tests for all new functionality
- Fork & Branch: Create a feature branch from the main branch
- Development: Implement changes following code standards
- Testing: Ensure all tests pass and add tests for new features
- Documentation: Update documentation for any API or configuration changes
- Review: Submit pull request with clear description of changes
- Tool-Based Extensions: New functionality should be implemented as tools when possible
- Platform Abstraction: Maintain clean separation between platform-specific and core logic
- Configuration Driven: New features should be configurable rather than hard-coded
- Backwards Compatibility: Maintain backwards compatibility for configuration and APIs
- Documentation: Check this README and inline code documentation first
- Issues: Report bugs and feature requests via GitHub issues
- Discussions: Use GitHub discussions for questions and community support
- Contributing: See the contributing guidelines above for development questions
The project follows semantic versioning (SemVer):
- Major versions: Breaking changes to APIs or configuration
- Minor versions: New features and enhancements
- Patch versions: Bug fixes and security updates
Current version: 0.1.0 (Initial release with core functionality)
- Validate setup:
./scripts/validate-docker.sh

Required environment variables:
- MATRIX_HOMESERVER - Your Matrix server URL
- MATRIX_USER_ID - Bot's Matrix user ID
- MATRIX_PASSWORD - Bot's Matrix password
- OPENROUTER_API_KEY - OpenRouter API key for AI inference
- NEYNAR_API_KEY - (Optional) Farcaster API key
- Start services:
  docker-compose up -d
- View logs:
  docker-compose logs -f chatbot
- Stop services:
  docker-compose down
- Restart bot:
  docker-compose restart chatbot
- Shell access:
  docker-compose exec chatbot bash
- Web interface: http://localhost:8000 (if enabled)
- chatbot/core/ - Core system components (orchestrator, world state, context management, AI engine)
- chatbot/tools/ - Dynamic tool system with registry and implementations
- chatbot/integrations/ - Platform observers (Matrix, Farcaster)
- chatbot/storage/ - Data persistence layer
To add a new tool:
- Create the tool class:
from typing import Dict, Any

from chatbot.tools.base import ToolInterface, ActionContext


class MyCustomTool(ToolInterface):
    @property
    def name(self) -> str:
        return "my_custom_action"

    @property
    def description(self) -> str:
        return "Description of what this tool does"

    @property
    def parameters_schema(self) -> Dict[str, Any]:
        return {
            "param1": "string (description)",
            "param2": "int (description)"
        }

    async def execute(self, params: Dict[str, Any], context: ActionContext) -> Dict[str, Any]:
        # Implementation here
        return {"success": True, "message": "Action completed"}

- Register the tool:

# In orchestrator's _initialize_tools method
self.tool_registry.register_tool(MyCustomTool())

The AI will automatically receive the tool description and can use it in decisions.
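To illustrate how registered tool descriptions can reach the AI, the sketch below folds each tool's name, description, and parameter schema into a system prompt. The prompt format and function name are assumptions for illustration, not the project's actual AIDecisionEngine behavior.

```python
# Illustrative sketch of turning registered tool descriptions into the
# decision engine's system prompt; the payload shape is assumed.
import json
from typing import Any, Dict, List


def build_system_prompt(tools: List[Any]) -> str:
    tool_specs: Dict[str, Any] = {
        tool.name: {
            "description": tool.description,
            "parameters": tool.parameters_schema,
        }
        for tool in tools
    }
    return (
        "You can respond with one of the following actions. "
        "Reply with JSON: {\"action\": <name>, \"params\": {...}}.\n"
        + json.dumps(tool_specs, indent=2)
    )
```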
Run tests with:
pytest

For coverage reports:

pytest --cov=chatbot

A web-based control panel is available for monitoring and managing the system:

python control_panel.py

Visit http://localhost:5000 to access the control panel.