A powerful and flexible application for managing and interacting with multiple Large Language Models (LLMs) through a unified interface.
🤖 Multi-Model Support: Seamlessly integrate with various LLM providers:
- Local models via LM Studio
- OpenAI models (GPT-3.5, GPT-4)
- Anthropic's Claude (Sonnet, Haiku)
- Deepseek models (Chat, Coder)
- Custom model implementations
💬 Advanced Chat Interface:
- Personalized chat agent with customizable name and personality
- Multi-session management
- Context-aware conversations
- File attachments support
- Real-time responses
📊 Vector Database Integration:
- Document embeddings with Weaviate
- Semantic search capabilities
- Efficient document management
- Automatic indexing
🤖 Agent System:
- Create and manage AI agents
- Tool integration
- Task automation
- Status monitoring
🛠️ Tool Management:
- Custom tool creation
- API integrations
- Function calling
- CLI command execution
🧠 Model Context Protocol (MCP):
- Context management
- Cross-model communication
- Context merging
- Response tracking
💻 CLI Interface:
- Command-line control
- System monitoring
- Agent management
- Quick actions
Prerequisites:
- Node.js 18 or higher
- npm or yarn
- A modern web browser
- ChromaDB v0.6.0 or higher
The application has been updated to support ChromaDB v0.6.0, which includes several important changes:
- Collection Listing: `list_collections()` now returns only collection names instead of full collection objects.
- Storage Backend: Uses SQLite by default instead of DuckDB+Parquet.
- Client Initialization: Uses the new `PersistentClient` class:

  ```python
  import chromadb
  from chromadb.config import Settings

  client = chromadb.PersistentClient(
      path="./chroma_data",
      settings=Settings(
          anonymized_telemetry=False,
          allow_reset=True,
          is_persistent=True
      )
  )
  ```
These changes improve reliability and performance while maintaining compatibility with existing data.
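For orientation, here is a minimal sketch of how a TypeScript frontend might talk to the local ChromaDB server that `start.bat` launches. It assumes the `chromadb` npm client and the default server address; the exact constructor options vary between client versions, so treat it as illustrative rather than the app's actual data layer.

```typescript
// Illustrative sketch only: connect to the local ChromaDB server and list collections.
// Assumes the `chromadb` npm package and a server at the default http://localhost:8000.
import { ChromaClient } from "chromadb";

async function listChromaCollections(): Promise<void> {
  const client = new ChromaClient({ path: "http://localhost:8000" });

  // In line with the v0.6.0 change above, treat the result as a lightweight
  // listing of collections rather than fully hydrated collection objects.
  const collections = await client.listCollections();
  console.log(collections);
}

listChromaCollections().catch(console.error);
```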
Installation steps:

Clone the repository:

```bash
git clone https://github.com/yourusername/multi-llm-app.git
cd multi-llm-app
```
Run the installation script:

```bat
install.bat
```
The installation script provides several options:
- Full Installation (All Components)
- Frontend Only (npm packages)
- Python Environment Only
- ChromaDB Setup Only
- Weaviate Setup Only
Choose the appropriate option based on your needs. For first-time setup, select "Full Installation".
Configure environment variables:

- Copy `.env.example` to `.env`
- Update the configuration values in `.env`
Start the application:

```bat
start.bat
```

This will start both the ChromaDB server and the frontend development server.

Note: If you encounter any issues during installation, you can run individual components of the installation process by selecting the appropriate option in `install.bat`.
To configure the application:

- Navigate to the Settings page
- Configure your LLM providers:
  - LM Studio URL (for local models)
  - OpenAI API key and model selection
  - Claude API key and model selection (Sonnet/Haiku)
  - Deepseek API key and model selection (Chat/Coder)
- Set your default provider for the chat agent
- Optional: Set up Weaviate for document embeddings
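To make the provider settings concrete, here is a hypothetical TypeScript shape for what this configuration amounts to; the field names are illustrative and not necessarily the app's actual store schema.

```typescript
// Hypothetical sketch of the provider settings described above.
type ProviderId = "lmstudio" | "openai" | "claude" | "deepseek";

interface ProviderSettings {
  enabled: boolean;
  apiKey?: string;   // not needed for local LM Studio models
  baseUrl?: string;  // e.g. the LM Studio server URL
  model: string;     // e.g. "gpt-4", "claude-3-haiku", "deepseek-chat"
}

interface AppSettings {
  defaultProvider: ProviderId;                      // used by the chat agent by default
  providers: Record<ProviderId, ProviderSettings>;
  weaviateUrl?: string;                             // optional, for document embeddings
}

const exampleSettings: AppSettings = {
  defaultProvider: "openai",
  providers: {
    lmstudio: { enabled: true, baseUrl: "http://localhost:1234/v1", model: "local-model" },
    openai:   { enabled: true, apiKey: "sk-...", model: "gpt-4" },
    claude:   { enabled: false, model: "claude-3-sonnet" },
    deepseek: { enabled: false, model: "deepseek-coder" },
  },
  weaviateUrl: "http://localhost:8080",
};
```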
The chat agent can be customized through the Chat Configuration page:
- Name: Set a custom name that the agent will respond to
- Personality:
  - Traits: Define characteristics (e.g., friendly, professional)
  - Tone: Choose from professional, casual, friendly, or formal
  - Style: Select concise, detailed, technical, or simple
  - Constraints: Add specific behavioral rules
- Language Model:
  - Provider: Use the default from settings or choose another
  - Model: Select from available models for the chosen provider
  - Parameters: Adjust temperature and max tokens
The chat agent's configuration syncs with your settings:
- The default provider from settings is automatically selected
- Changing the provider in chat config updates the default in settings
- Only enabled providers from settings are available
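Putting the pieces above together, a hypothetical TypeScript shape for the chat agent configuration might look like the sketch below; it mirrors the fields described on this page but is not necessarily the app's exact schema.

```typescript
// Hypothetical shape for the chat agent configuration described above.
type Provider = "lmstudio" | "openai" | "claude" | "deepseek";

interface ChatAgentConfig {
  name: string;                        // the name the agent responds to
  personality: {
    traits: string[];                  // e.g. ["friendly", "professional"]
    tone: "professional" | "casual" | "friendly" | "formal";
    style: "concise" | "detailed" | "technical" | "simple";
    constraints: string[];             // specific behavioral rules
  };
  languageModel: {
    provider?: Provider;               // omitted = fall back to the settings default
    model: string;
    temperature: number;
    maxTokens: number;
  };
}

// Example with an illustrative name and parameters:
const exampleAgent: ChatAgentConfig = {
  name: "Nova",
  personality: {
    traits: ["friendly", "professional"],
    tone: "casual",
    style: "concise",
    constraints: ["Stay on topic", "Cite sources when possible"],
  },
  languageModel: { model: "gpt-4", temperature: 0.7, maxTokens: 1024 },
};
```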
The application is built with modern web technologies:
- Frontend: React, TypeScript, Tailwind CSS
- State Management: Zustand
- Routing: React Router
- Vector Database: Weaviate
- Embedding Storage: ChromaDB
- UI Components: Custom components with Lucide icons
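Because state is managed with Zustand, a minimal, hypothetical store sketch is shown below; the real stores live under `src/store/` and will differ in shape.

```typescript
// Minimal Zustand store sketch (hypothetical; actual stores live in src/store/).
import { create } from "zustand";

interface SettingsState {
  defaultProvider: string;
  setDefaultProvider: (provider: string) => void;
}

export const useSettingsStore = create<SettingsState>((set) => ({
  defaultProvider: "openai",
  setDefaultProvider: (provider) => set({ defaultProvider: provider }),
}));

// Usage inside a React component:
//   const defaultProvider = useSettingsStore((s) => s.defaultProvider);
```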
Key components:

- `Layout`: Main application structure
- `ChatInput/Message`: Chat interface components
- `AgentCard/Form`: Agent management
- `ToolCard/Form`: Tool configuration
- `DocumentUploader/List`: Document handling
- `Modal`: Reusable modal component
- `StatusCard/StatsCard`: Dashboard components
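As an illustration of how such components are typically typed, here is a hypothetical sketch of a small `StatusCard`; the real component's props and styling will differ.

```tsx
// Hypothetical sketch of a small presentational component with Tailwind classes
// and a Lucide icon; the actual StatusCard in src/components/ may look different.
import { Activity } from "lucide-react";

interface StatusCardProps {
  title: string;
  status: "online" | "offline" | "degraded";
}

export function StatusCard({ title, status }: StatusCardProps) {
  return (
    <div className="flex items-center gap-2 rounded-lg border p-4">
      <Activity className="h-4 w-4" />
      <span className="font-medium">{title}</span>
      <span className="text-sm text-gray-500">{status}</span>
    </div>
  );
}
```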
Routes:

- `/`: Chat interface
- `/embeddings`: Document embeddings
- `/dashboard`: System overview
- `/agents`: Agent management
- `/tools`: Tool configuration
- `/mcp`: Model Context Protocol
- `/cli`: Command-line interface
- `/settings`: System configuration
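These pages would typically be wired up with React Router; the sketch below shows hypothetical route declarations matching the routes above, with placeholder components standing in for the real `Layout` and pages under `src/pages/`.

```tsx
// Hypothetical React Router v6 route declarations mirroring the routes above.
// Placeholder components stand in for the real Layout and page components.
import { createBrowserRouter, Outlet } from "react-router-dom";

const Placeholder = ({ name }: { name: string }) => <div>{name}</div>;

export const router = createBrowserRouter([
  {
    path: "/",
    element: <Outlet />, // the real app renders its Layout component here
    children: [
      { index: true, element: <Placeholder name="Chat" /> },
      { path: "embeddings", element: <Placeholder name="Embeddings" /> },
      { path: "dashboard", element: <Placeholder name="Dashboard" /> },
      { path: "agents", element: <Placeholder name="Agents" /> },
      { path: "tools", element: <Placeholder name="Tools" /> },
      { path: "mcp", element: <Placeholder name="MCP" /> },
      { path: "cli", element: <Placeholder name="CLI" /> },
      { path: "settings", element: <Placeholder name="Settings" /> },
    ],
  },
]);

// Render with <RouterProvider router={router} /> from react-router-dom in main.tsx.
```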
Project structure:

```
src/
├── components/   # Reusable UI components
├── pages/        # Application pages
├── lib/          # Utility functions and services
├── store/        # State management
├── types/        # TypeScript definitions
└── main.tsx      # Application entry point
```
Build for production:

```bash
npm run build
```

Run tests:

```bash
npm run test
```
To contribute:

- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
The repository includes comprehensive `.gitignore` settings for:
- Database files and data (chroma_data/, chroma_backup/)
- Python-specific files (`__pycache__/`, `*.pyc`)
- Development and build files (dist/, build/)
- Environment and configuration files (.env)
- Local development files (.vite/, .cache/)
- System and temporary files
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments:
- Icons by Lucide
- UI inspiration from various modern web applications
- Community feedback and contributions
For support, please:
- Check the documentation
- Search existing issues
- Create a new issue if needed
Built with ❤️ by [Your Name/Organization]