> ⚠️ **Warning**
>
> This project is in its early stages of development.
- ✨ Introduction
- 🌟 Key Features
- 🛠️ Technologies Used
- 🏗️ Production Deployment
- 🧑‍💻 Local Development
- 📄 API Documentation
- 🗺️ Project Structure
- 🤝 Contributing
- 🐛 Issues and Bug Reports
- 📄 License
## ✨ Introduction

Welcome to Meridian! Built on a graph-based architecture, Meridian goes beyond simple turn-taking, enabling richer, more nuanced, and highly efficient interactions.
Traditional AI chats often struggle with complex queries, requiring multiple back-and-forth interactions to gather information. Our innovative approach, deeply rooted in a dynamic graph structure, allows for intelligent parallelization of information gathering and processing. Imagine asking a question, and instead of a linear response, our system simultaneously consults multiple specialized AI models, synthesizing their insights into a coherent, comprehensive, and accurate answer.
This isn't just about speed; it's about depth, accuracy, and unlocking advanced AI workflows previously out of reach for consumer applications.
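The fan-out/fan-in flow described above can be sketched in a few lines. This is a toy illustration, not Meridian's actual code: `consult` stands in for a network call to one model via OpenRouter, and `synthesize` (which in Meridian is itself an LLM call) is reduced to a simple join.

```python
import asyncio

async def consult(model: str, prompt: str) -> str:
    # Stand-in for a network call to one model; in the real
    # system this would hit the OpenRouter API.
    await asyncio.sleep(0)  # placeholder for network latency
    return f"[{model}] answer to: {prompt}"

def synthesize(drafts: list[str]) -> str:
    # Toy synthesis step; Meridian uses a final LLM here instead.
    return "\n".join(drafts)

async def answer(prompt: str, models: list[str]) -> str:
    # Fan out: query every model concurrently rather than in sequence.
    drafts = await asyncio.gather(*(consult(m, prompt) for m in models))
    # Fan in: combine the drafts into one unified answer.
    return synthesize(drafts)

result = asyncio.run(answer("What is a graph?", ["model-a", "model-b"]))
print(result)
```

The point of the sketch is the shape of the computation: the per-model calls overlap in time, and only the synthesis step waits for all of them.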
## 🌟 Key Features

- Graph-Based AI Engine: At its core, Meridian leverages a graph database to represent and process information. This allows complex relationships between concepts, enabling more intelligent context retention and dynamic query planning.
- Parallel Query Processing: User prompts can be dispatched to multiple LLMs in parallel. Their responses are then intelligently combined by a final LLM, delivering a unified and comprehensive answer.
- Model Agnostic: Powered by OpenRouter, Meridian integrates with a wide range of AI models, giving you the flexibility to select the optimal model for each scenario.
- OAuth & UserPass: Secure user authentication and management, ensuring that your data is protected and accessible only to you.
- Attachment Support: Users can upload attachments, enriching the context of conversations.
- Syntax Highlighting: Code snippets are displayed with syntax highlighting, making technical content easier to read.
- LaTeX Rendering: Mathematical expressions are rendered cleanly, allowing clear communication of complex ideas.
- Chat Branching: Thanks to the graph structure, Meridian supports branching conversations, letting users explore different paths and topics without losing context.
- Highly Customizable: Meridian is highly configurable, allowing you to tailor the system to your specific needs and preferences.
See a detailed overview of the features in the Features.md file.
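Chat branching falls out naturally once messages form a graph rather than a flat list. As a minimal sketch (not Meridian's actual data model), each message can be a node holding a pointer to its parent, so two branches share their common history and either branch's context is recovered by walking parent links:

```python
from dataclasses import dataclass

@dataclass
class Node:
    # One chat message; parent=None marks the conversation root.
    text: str
    parent: "Node | None" = None

def history(node: Node) -> list[str]:
    # Walk parent links to rebuild the context for one branch.
    out = []
    while node is not None:
        out.append(node.text)
        node = node.parent
    return list(reversed(out))

root = Node("Q: explain graphs")
reply = Node("A: a graph is nodes plus edges", parent=root)
# Two follow-ups branch from the same reply without losing context.
branch_math = Node("Q: give a math example", parent=reply)
branch_code = Node("Q: give a code example", parent=reply)
```

Here `history(branch_math)` and `history(branch_code)` share the first two messages, which is exactly the "explore different paths without losing context" behavior described above.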
## 🛠️ Technologies Used

- Frontend:
- Backend:
## 🏗️ Production Deployment

Prerequisites:

- Docker and Docker Compose installed on your machine.
- yq (from Mike Farah) for TOML processing.
1. Clone the repository:

   ```bash
   git clone git@github.com:MathisVerstrepen/Meridian.git
   cd Meridian/docker
   ```

2. Create a `config.toml` file: copy the `config.example.toml` file to `config.toml` and customize it with your production settings.

   ```bash
   cp config.example.toml config.toml
   ```

   Then set the necessary environment variables in the `config.toml` file.

3. Start Meridian: use the provided bash script to start the Docker services. This starts the two databases (PostgreSQL and Neo4j), the backend API server, and the frontend application.

   ```bash
   chmod +x run.sh
   ./run.sh -d
   ```

4. Access the application: open your web browser and navigate to `http://localhost:3000` (default port) to reach the Meridian frontend.
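For orientation, a filled-in `config.toml` might look something like the fragment below. The section and key names here are purely hypothetical illustrations; the authoritative list of settings is in `config.example.toml`.

```toml
# Hypothetical example only — consult config.example.toml for the real keys.
[database]
postgres_url = "postgresql://meridian:changeme@postgres:5432/meridian"
neo4j_url = "bolt://neo4j:7687"

[openrouter]
api_key = "sk-or-..."  # your OpenRouter API key
```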
## 🧑‍💻 Local Development

Prerequisites:

- Docker and Docker Compose installed on your machine.
- yq (from Mike Farah) for TOML processing.
- Python 3.11 or higher installed on your machine.
- Node.js and npm installed on your machine for the frontend.
1. Clone the repository:

   ```bash
   git clone git@github.com:MathisVerstrepen/Meridian.git
   cd Meridian/docker
   ```

2. Create a `config.local.toml` file: copy the `config.local.example.toml` file to `config.local.toml` and customize it with your settings.

   ```bash
   cp config.local.example.toml config.local.toml
   ```

   Then set the necessary environment variables in the `config.local.toml` file.

3. Start the databases: use the provided bash script to start the Docker services. This starts the two databases (PostgreSQL and Neo4j).

   ```bash
   chmod +x run.sh
   ./run.sh dev -d
   ```

4. Start the backend: open a new terminal window, set up a Python virtual environment, and run the FastAPI development server.

   ```bash
   cd ../api
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   cd app
   fastapi dev main.py
   ```

5. Start the frontend: open another terminal window and run the frontend dev server.

   ```bash
   cd ../ui
   npm install
   npm run dev
   ```

6. Access the application: open your web browser and navigate to `http://localhost:3000` (default port) to reach the Meridian frontend.
## 📄 API Documentation

The backend API documentation (powered by FastAPI's Swagger UI) is available at `http://localhost:8000/docs` while the backend is running.
## 🗺️ Project Structure

```
Meridian/
├── docker/      # Docker-related files and configuration files
├── api/         # Backend API code
├── ui/          # Frontend code
├── docs/        # Documentation files
└── README.md    # Project overview and setup instructions
```
## 🤝 Contributing

We welcome contributions to Meridian! Whether it's adding new features, improving existing ones, or fixing bugs, your help is appreciated.
## 🐛 Issues and Bug Reports

Found a bug or have a feature request? Please open an issue on our GitHub Issues page.
## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.
Made with ❤️ by Mathis Verstrepen