Create custom Telegram bots through natural conversation, powered by AI.
Turn your ideas into fully functional Telegram bots in minutes - just describe what you want in plain English, and our AI will generate production-ready code for you!
- Conversational Bot Creation - Describe your bot in natural language
- AI-Powered Code Generation - Real, production-ready code (not templates!)
- Multiple Bot Types - Expense trackers, quizzes, reminders, and more
- Instant Deployment - Get a ready-to-run ZIP file
- Iterative Refinement - Request changes and improvements
- LLM-Powered - Uses Ollama (local), Gemini, OpenAI, or Claude
- Modern Framework - Built with aiogram 3.x
- FSM State Management - Proper conversation flow handling
- Input Validation - Secure string sanitization and validation
- Error Recovery - Robust partial-success handling
- MCP Architecture - Modular microservice design
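The "FSM State Management" bullet above can be illustrated with a plain-Python sketch of the /create conversation flow. This is a simplified illustration only: the real bot uses aiogram 3.x FSM storage, and the class and state names below are hypothetical.

```python
from enum import Enum, auto

class CreateBotState(Enum):
    """Hypothetical states mirroring a /create conversation."""
    DESCRIBING = auto()      # user describes the bot
    CONFIRMING = auto()      # bot summarizes, asks "does this sound right?"
    AWAITING_TOKEN = auto()  # waiting for the @BotFather token
    DONE = auto()

class SessionFSM:
    """Minimal per-chat state store, loosely analogous to aiogram's FSMContext."""
    _order = [CreateBotState.DESCRIBING, CreateBotState.CONFIRMING,
              CreateBotState.AWAITING_TOKEN, CreateBotState.DONE]

    def __init__(self) -> None:
        self._states: dict[int, CreateBotState] = {}

    def start(self, chat_id: int) -> CreateBotState:
        self._states[chat_id] = CreateBotState.DESCRIBING
        return self._states[chat_id]

    def advance(self, chat_id: int) -> CreateBotState:
        idx = self._order.index(self._states[chat_id])
        self._states[chat_id] = self._order[min(idx + 1, len(self._order) - 1)]
        return self._states[chat_id]

fsm = SessionFSM()
fsm.start(42)                # /create received
fsm.advance(42)              # description parsed -> CONFIRMING
print(fsm.advance(42).name)  # user confirmed -> prints "AWAITING_TOKEN"
```

The real FSM also stores per-user data (requirements, token) alongside the state; this sketch only shows the state transitions.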
- Expense Trackers - With categories, stats, CSV export
- Todo Lists - Task management with priorities
- Quiz Bots - Trivia games with scoring
- Reminder Bots - Schedule notifications
- Echo Bots - Simple message responses
- Custom Bots - Anything you can describe!
- Python 3.10+
- uv package manager
- Ollama (for local LLM) OR API key for Gemini/OpenAI
1. Clone the repository:

```shell
git clone <repository-url>
cd mcp-telegram-bot-v1
```

2. Install dependencies:

```shell
uv sync
```

3. Set up Ollama (Recommended - Free & Unlimited!):

```shell
# Install Ollama (macOS)
brew install ollama

# Pull the code generation model
ollama pull qwen2.5-coder:latest

# Ollama will auto-start in the background
```

Alternative: Use Gemini or OpenAI instead (see Configuration below)
4. Configure environment:

```shell
cp .env.example .env
# Edit .env with your settings
```

Required configuration:

```shell
# Your BotGenerator bot token from @BotFather
BOT_TOKEN=your_telegram_bot_token

# LLM Configuration (Ollama - Local & Free!)
LLM_PROVIDER=ollama
LLM_MODEL=qwen2.5-coder:latest

# Code generation LLM
CODEGEN_LLM_PROVIDER=ollama
CODEGEN_LLM_MODEL=qwen2.5-coder:latest
```

5. Run the bot:

```shell
cd bot_generator
uv run python main.py
```

6. Start creating bots!
- Open Telegram
- Find your bot
- Send /create
- Follow the conversation!
Conversation with BotGenerator:
You: /create
Bot: Let's create your Telegram bot!
What kind of bot would you like to build?
You: I want an expense tracking bot with categories
Bot: Great! A few questions:
1. Should users be able to export data (like CSV)?
2. Do you need statistics/reports?
3. Any specific commands you want?
You: Yes to export, yes to stats, and I want /add, /list, and /stats commands
Bot: Perfect! Here's what I understand:
- Expense tracker bot
- Category support
- CSV export
- Statistics/reports
- Commands: /add, /list, /stats
Does this sound right?
You: Yes!
Bot: Excellent! Now I need your bot token from @BotFather.
(If you don't have one, message @BotFather and create a new bot)
You: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz123456789
Bot: Token validated for @my_expense_bot!
Generating your bot now...
This will take about 10-15 seconds. I'm:
1. Analyzing your requirements
2. Designing the architecture
3. Generating code files
4. Validating everything
Please wait...
Bot: Your bot is ready!
Package: expense_tracker_bot.zip (12 files generated)
[ZIP file attachment]
**Next steps:**
1. Extract the ZIP file
2. Install: pip install -r requirements.txt
3. Run: python main.py
Your bot will be live! Need any changes?
```
┌─────────────────────────────────────────────────┐
│                 User (Telegram)                 │
└──────────────────┬──────────────────────────────┘
                   │
┌──────────────────▼──────────────────────────────┐
│         BotGenerator (Main Application)         │
│  ┌─────────────────────────────────────────┐    │
│  │   Conversational Orchestrator (Brain)   │    │
│  │   • LLM-powered conversation            │    │
│  │   • State management (FSM)              │    │
│  │   • Workflow coordination               │    │
│  └───────────────┬─────────────────────────┘    │
└──────────────────┼──────────────────────────────┘
                   │
       ┌───────────┼───────────┐
       │           │           │
┌──────▼─────┐ ┌───▼────┐ ┌────▼────────┐
│ LLM Service│ │ MCP    │ │ Session     │
│            │ │ Client │ │ Manager     │
│  Ollama/   │ │        │ │             │
│  Gemini/   │ │        │ │             │
│  OpenAI    │ │        │ │             │
└────────────┘ └───┬────┘ └─────────────┘
                   │
    ┌──────────────┼──────────────┐
    │              │              │
┌───▼───┐   ┌──────▼─────┐   ┌────▼────┐
│Parser │   │Architecture│   │Generator│
│       │   │ Designer   │   │(LLM-    │
│       │   │            │   │powered) │
└───────┘   └────────────┘   └─────────┘
```
- Parser - Extract structured intent from natural language
- Architecture Designer - Design bot architecture
- Template Library - Find matching code patterns
- Code Generator - Generate production code with LLM
- Validator - Check syntax, logic, and security
Detailed Architecture: See Architecture Guide
```shell
LLM_PROVIDER=ollama
LLM_MODEL=qwen2.5-coder:latest
CODEGEN_LLM_PROVIDER=ollama
CODEGEN_LLM_MODEL=qwen2.5-coder:latest
```

Advantages:
- Unlimited usage
- No API costs
- Fast inference
- Privacy (runs locally)
```shell
LLM_PROVIDER=gemini
LLM_API_KEY=your_gemini_api_key
LLM_MODEL=gemini-2.5-flash
CODEGEN_LLM_PROVIDER=gemini
CODEGEN_LLM_API_KEY=your_gemini_api_key
CODEGEN_LLM_MODEL=gemini-2.5-flash
```

Limits: 20 requests/day (free tier)
```shell
LLM_PROVIDER=openai
LLM_API_KEY=sk-proj-...
LLM_MODEL=gpt-4o-mini
CODEGEN_LLM_PROVIDER=openai
CODEGEN_LLM_API_KEY=sk-proj-...
CODEGEN_LLM_MODEL=gpt-4o
```

Note: Requires paid API access
```
mcp-telegram-bot-v1/
├── bot_generator/                         # Main application
│   ├── main.py                            # Entry point
│   ├── handlers/                          # Telegram message handlers
│   │   ├── start.py                       # /start, /create commands
│   │   └── messages.py                    # Message processing
│   ├── services/                          # Business logic
│   │   ├── conversational_orchestrator.py # AI conversation brain
│   │   ├── generator_service.py           # Bot generation workflow
│   │   ├── llm_service.py                 # LLM provider abstraction
│   │   └── session_manager.py             # User session management
│   ├── models/                            # Data models
│   └── utils/                             # Utilities
│       ├── mcp_client.py                  # MCP server communication
│       └── states.py                      # FSM states
│
├── mcp_servers/                           # MCP microservices
│   ├── parser/                            # Intent extraction
│   ├── architecture/                      # Architecture design
│   ├── templates_lib/                     # Code templates
│   ├── generator/                         # Code generation
│   │   ├── llm_codegen.py                 # LLM-powered code gen
│   │   └── prompts/                       # Generation prompts
│   └── validator/                         # Code validation
│
├── shared/                                # Shared utilities
│   ├── models.py                          # Shared data models
│   ├── utils.py                           # Helper functions
│   └── validation.py                      # Input validation & sanitization
│
├── .env                                   # Configuration (create from .env.example)
└── requirements.txt                       # Python dependencies
```
- Prompt sanitization - Prevents DoS attacks
- Bot name sanitization - Prevents path traversal
- File name validation - Secure ZIP creation
- Token validation - Verifies with Telegram API
- Syntax validation - Python syntax checking
- Security scanning - Vulnerability detection
- Logic testing - Bot functionality verification
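To illustrate the kind of checks the first two bullets describe, here is a sketch of bot-name sanitization (blocking path traversal) and a cheap local token-format check that could run before the round-trip to the Telegram API. The regexes and function names are assumptions for illustration, not the exact rules in `shared/validation.py`.

```python
import re

BOT_NAME_RE = re.compile(r"^[A-Za-z0-9_]{1,64}$")
# Telegram bot tokens look like "<numeric id>:<alphanumeric secret>"
TOKEN_RE = re.compile(r"^\d{6,12}:[A-Za-z0-9_-]{30,50}$")

def sanitize_bot_name(raw: str) -> str:
    """Reject traversal sequences and odd characters before using the name on disk."""
    name = raw.strip().replace(" ", "_")
    if not BOT_NAME_RE.fullmatch(name):
        raise ValueError(f"unsafe bot name: {raw!r}")
    return name

def looks_like_token(token: str) -> bool:
    """Local format check; the real verification still calls the Telegram API."""
    return TOKEN_RE.fullmatch(token) is not None

print(sanitize_bot_name("expense tracker"))  # prints "expense_tracker"
print(looks_like_token("123456789:ABCdefGHIjklMNOpqrsTUVwxyz123456789"))  # True
print(looks_like_token("../../etc/passwd"))  # False
```

An allow-list regex like `BOT_NAME_RE` is safer than stripping known-bad characters, because anything not explicitly permitted is rejected.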
```shell
# Add tests here when implemented
pytest tests/

# Format code
black bot_generator/ mcp_servers/ shared/

# Lint
flake8 bot_generator/ mcp_servers/ shared/

# Type checking
mypy bot_generator/ mcp_servers/ shared/
```

Enable detailed logging:

```python
# In main.py
logging.basicConfig(level=logging.DEBUG)
```

When you create a bot, you'll receive a ZIP file with this structure:
```
your_bot_name/
├── main.py                  # Bot entry point
├── .env                     # Configuration (with your token)
├── requirements.txt         # Dependencies
├── handlers/                # Command handlers
│   ├── add.py               # /add command (with FSM)
│   ├── list.py              # /list command
│   └── stats.py             # /stats command
├── services/                # Business logic
│   └── expense_service.py   # Data management
├── models/                  # Data models
│   └── expense.py           # Expense dataclass
└── data/                    # Storage directory
    └── expenses.json        # Data file
```
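Assembling such a package safely can be sketched with the standard `zipfile` module, validating each relative path before it goes into the archive. This is an illustrative sketch, not the project's actual packaging code; the file contents and function name are made up.

```python
import io
import zipfile
from pathlib import PurePosixPath

def build_zip(bot_name: str, files: dict[str, str]) -> bytes:
    """Write generated files into an in-memory ZIP, rejecting unsafe paths."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for rel_path, content in files.items():
            p = PurePosixPath(rel_path)
            if p.is_absolute() or ".." in p.parts:
                raise ValueError(f"unsafe path in archive: {rel_path}")
            zf.writestr(f"{bot_name}/{rel_path}", content)
    return buf.getvalue()

# Hypothetical generator output
data = build_zip("expense_tracker_bot", {
    "main.py": "print('bot entry point')\n",
    "handlers/add.py": "# /add command handler\n",
})
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    print(zf.namelist())  # ['expense_tracker_bot/main.py', 'expense_tracker_bot/handlers/add.py']
```

Rejecting absolute paths and `..` components at packaging time is the counterpart of the "File name validation - Secure ZIP creation" guarantee listed above.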
Generated bots include:
- Complete FSM state management
- Input validation
- Error handling
- Database integration
- Logging
- Professional code structure
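For example, the generated expense model and its JSON persistence might look roughly like this, assuming flat-file storage in `data/expenses.json` as shown in the layout above. The field names are illustrative, not the exact generated schema.

```python
import json
import tempfile
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class Expense:
    amount: float
    category: str
    note: str = ""

def save_expenses(path: Path, expenses: list[Expense]) -> None:
    """Serialize expenses to a JSON file, creating the data directory if needed."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps([asdict(e) for e in expenses], indent=2))

def load_expenses(path: Path) -> list[Expense]:
    """Load expenses back; an absent file just means no expenses yet."""
    if not path.exists():
        return []
    return [Expense(**item) for item in json.loads(path.read_text())]

data_file = Path(tempfile.mkdtemp()) / "data" / "expenses.json"
save_expenses(data_file, [Expense(12.5, "food", "lunch")])
print(load_expenses(data_file)[0].category)  # prints "food"
```

Round-tripping through `asdict` keeps the storage format in lockstep with the dataclass, so adding a field to `Expense` automatically extends the JSON schema.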
Generated handler includes:
- FSM states (AddExpense.amount, AddExpense.category, etc.)
- Input validation (amount must be numeric)
- Category selection with inline keyboard
- Error messages in user-friendly format
- Database persistence
- Logging for debugging

Generated features:
- Question database management
- Score tracking
- Timer functionality
- Leaderboard
- Multiple choice questions
- Answer validation

```shell
# Check if Ollama is running
ollama ps

# Restart Ollama
ollama serve
```

```shell
# Get a new token from @BotFather
# Make sure to copy the entire token (numbers:letters)
# Token format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz
```

```shell
# Reinstall dependencies
uv sync --force

# Or use pip
pip install -r requirements.txt
```

```shell
# Check your .env configuration
# Verify API keys are correct
# Try switching providers (Ollama is most reliable)
```

- LLM-powered code generation
- Multi-provider LLM support (Ollama, Gemini, OpenAI)
- Conversational bot creation
- Input validation and security
- Token validation with Telegram API
- Error recovery with partial success
- FSM state management
- Comprehensive test suite
- Rate limiting for LLM calls
- Metrics and monitoring
- True MCP protocol implementation
- Web interface
- Bot template marketplace
- Code iteration and refinement
- Multi-language support
- Bot hosting service
- Visual bot builder
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
Areas needing help:
- Additional bot type templates
- LLM prompt optimization
- Test coverage
- Documentation improvements
- Bug fixes
[Add your license here]
- aiogram - Modern Telegram Bot framework
- Ollama - Local LLM inference
- Google Gemini - AI model
- OpenAI - GPT models
- MCP Protocol - Microservice architecture inspiration
- Issues: GitHub Issues
- Documentation: Architecture Guide
- Quick Start: QUICKSTART.md
If you find this project useful, please consider giving it a star!
Made with care by [Your Name]
Empowering everyone to create Telegram bots, no coding required!