A memory-aware fact extraction and knowledge preservation system that combines advanced LLM capabilities with knowledge graph technology for intelligent, context-aware conversational AI.
This project implements an intelligent conversational agent that:
- Extracts and maintains structured knowledge from conversations
- Stores memories in a vector database for semantic search
- Manages entity relationships in a graph database
- Provides context-aware responses based on accumulated knowledge
- Prioritizes precise, grounded retrieval and preservation of information over speculation
- Memory Management: Persistent storage and retrieval of conversation history and facts
- Vector Search: Semantic similarity search using OpenAI embeddings
- Knowledge Graph: Entity and relationship storage using Neo4j
- LLM Integration: OpenRouter API for advanced language understanding
- Context Awareness: Responses grounded in stored memories and facts
- User Profiles: Per-user memory isolation and personalization
- Mem0 Memory System: Manages memory storage and retrieval
- OpenRouter API: Provides access to advanced LLMs (Trinity Large)
- Qdrant: Vector database for semantic search and embeddings
- Neo4j: Graph database for knowledge graph storage
- OpenAI: Text embedding generation
```
User Input
    ↓
Memory Search (Semantic)
    ↓
LLM with Context (System Prompt + Memories)
    ↓
Response Generation
    ↓
Store in Memory
```
- Python 3.8+
- Docker & Docker Compose
- OpenRouter API Key
- Environment variables configured
```bash
git clone https://github.com/sachin-kumar-2003/Knowledge_Graph.git
cd Knowledge_Graph
```

On Windows (PowerShell):

```bash
./venv.bat
```

Or manually:

```bash
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```

Create a `.env` file in the root directory:

```
OPENROUTER_API_KEY=your_api_key_here
```

```bash
docker-compose -f docker-compose.graph.yml up -d
```

This starts:

- Qdrant (Vector DB): `localhost:6333`
- Neo4j (Graph DB): `localhost:7687`
Qdrant:

- Port: `6333`
- Function: semantic search and embedding storage

Neo4j:

- URL: `bolt://localhost:7687`
- Username: `neo4j`
- Password: `12345678` (change in production)
- Admin UI: `http://localhost:7474`
- Provider: OpenRouter
- Embedding Model: `openai/text-embedding-3-small`
- LLM Model: `arcee-ai/trinity-large-preview:free`
- Temperature: `0` (deterministic responses)
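The settings above might be wired into Mem0 roughly as follows. This is a sketch, not the project's actual `main.py`: the key names (`vector_store`, `graph_store`, `openai_base_url`, …) follow mem0's `Memory.from_config` conventions but should be checked against the mem0 version you have installed.

```python
# Hypothetical Mem0 configuration mirroring the documented settings.
# Key names are assumptions based on mem0's from_config conventions.
import os

MEM0_CONFIG = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "12345678",  # change in production
        },
    },
    "llm": {
        # OpenRouter exposes an OpenAI-compatible API, so the OpenAI
        # provider with a custom base URL is one way to reach it.
        "provider": "openai",
        "config": {
            "model": "arcee-ai/trinity-large-preview:free",
            "temperature": 0,
            "openai_base_url": "https://openrouter.ai/api/v1",
            "api_key": os.environ.get("OPENROUTER_API_KEY", ""),
        },
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "openai/text-embedding-3-small"},
    },
}
```

With a config like this, `Memory.from_config(MEM0_CONFIG)` would return a client whose `search`/`add` calls hit the Qdrant and Neo4j containers started above.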
```bash
python main.py
```

Then interact with the bot:

```
ask... What is my name?
Bot = Based on my memories, your name is rahul.
ask... What did we discuss earlier?
Bot = [Context-aware response based on stored memories]
```
- User enters a query
- System searches memories for relevant context
- LLM receives system prompt with retrieved memories
- Bot generates response with context awareness
- Conversation is stored for future reference
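The steps above can be sketched as a single `chat()` function. Everything here is illustrative: `FakeMemory` stands in for the real mem0 client and `respond()` for the OpenRouter call, so the sketch runs without any services.

```python
# Illustrative retrieve -> prompt -> respond -> store loop.
# FakeMemory and respond() are hypothetical stand-ins, not the real code.

class FakeMemory:
    """In-memory stand-in for mem0's search/add interface."""
    def __init__(self):
        self.items = []

    def search(self, query, user_id):
        # Real mem0 ranks by embedding similarity; here we simply
        # return everything stored for this user.
        return [m for m in self.items if m["user_id"] == user_id]

    def add(self, message, user_id):
        self.items.append({"memory": message, "user_id": user_id})


def respond(system_prompt, message):
    # Placeholder for the OpenRouter chat-completion call.
    return f"[answer grounded in {system_prompt.count('- ')} memories]"


def chat(mem_client, message, user_id="default"):
    # 1. Search memories for relevant context.
    memories = mem_client.search(message, user_id=user_id)
    # 2. Ground the system prompt in the retrieved memories.
    context = "\n".join(f"- {m['memory']}" for m in memories)
    system_prompt = f"You are a memory-aware assistant.\nKnown facts:\n{context}"
    # 3. Generate a response, then 4. store the turn for future queries.
    reply = respond(system_prompt, message)
    mem_client.add(message, user_id=user_id)
    return reply
```

Swapping `FakeMemory` for a configured mem0 `Memory` instance and `respond()` for a real chat-completion call yields the shape of the CLI loop described above.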
The bot operates with a specialized system prompt that:
- Identifies itself as a memory-aware fact extraction agent
- Uses retrieved memories to ground responses
- Maintains a professional, analytical tone
- Signals uncertainty when appropriate
- Prioritizes accuracy over speculation
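A prompt with those properties could look like the following. The wording is illustrative, not the actual prompt in `main.py`; `SYSTEM_PROMPT_TEMPLATE` and `build_system_prompt` are hypothetical names.

```python
# Hypothetical system prompt reflecting the behaviours listed above.
SYSTEM_PROMPT_TEMPLATE = """\
You are a memory-aware fact extraction agent.
Ground every answer in the retrieved memories below. If they do not
contain the answer, say so plainly instead of speculating.
Maintain a professional, analytical tone.

Retrieved memories:
{memories}
"""

def build_system_prompt(memories):
    """Fill the template with bullet-listed memory strings."""
    bullets = "\n".join(f"- {m}" for m in memories) or "(none)"
    return SYSTEM_PROMPT_TEMPLATE.format(memories=bullets)
```

Baking the "say so plainly" instruction into the prompt is what lets the bot signal uncertainty instead of inventing facts when retrieval comes back empty.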
While this is currently a CLI application, the core functions are:

- `chat(message)`: main interaction function
  - Input: user message
  - Output: context-aware response
  - Side effects: stores the conversation in memory
- `mem_client.search()`: search memories
  - Input: query, `user_id`
  - Output: ranked list of relevant memories
- `mem_client.add()`: add to memory
  - Input: message, `user_id`
  - Output: memory stored in the vector and graph databases
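Because the exact return shape of `mem_client.search()` varies between mem0 versions (a bare list, or a dict with a `"results"` key), a small helper can normalize and rank the hits before they reach the prompt. The `"memory"` and `"score"` keys below are assumptions to verify against your installed version.

```python
# Defensive consumer for mem_client.search() results (shape assumed).
def top_memories(results, min_score=0.3, limit=5):
    """Keep the highest-scoring memories above a relevance threshold."""
    # Accept either {"results": [...]} or a bare list of hit dicts.
    hits = results.get("results", results) if isinstance(results, dict) else results
    # Rank by similarity score, highest first; missing scores rank last.
    ranked = sorted(hits, key=lambda r: r.get("score", 0.0), reverse=True)
    return [r["memory"] for r in ranked if r.get("score", 0.0) >= min_score][:limit]
```

Filtering on a minimum score keeps weakly related memories out of the system prompt, which helps the bot stay grounded rather than free-associating over noise.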
```
Knowledge_Graph/
├── main.py                    # Main application
├── docker-compose.graph.yml   # Docker services configuration
├── venv.bat                   # Virtual environment script (Windows)
├── freeze.bat                 # Dependency freeze script
├── .env                       # Environment variables (not in repo)
└── README.md                  # This file
```
Key Python packages:

- `mem0-ai`: memory management system
- `openai`: OpenAI client library
- `python-dotenv`: environment variable management
- Neo4j and Qdrant clients (included with mem0)

Install all dependencies:

```bash
pip install mem0-ai openai python-dotenv
```

Troubleshooting:

- Ensure the Docker services are running: `docker-compose -f docker-compose.graph.yml ps`
- Verify Qdrant is accessible: `curl http://localhost:6333/health`
- Check the Neo4j connection via the admin UI at `http://localhost:7474`
- Verify `OPENROUTER_API_KEY` is set in `.env`
- Check API key validity on the OpenRouter dashboard
- Confirm Docker volumes are properly mounted
- Check the database logs: `docker-compose logs neo4j` or `docker-compose logs qdrant`
- Change the Neo4j default password (`12345678`)
- Secure API keys in environment variables
- Use environment-specific `.env` files
- Implement proper authentication for graph access
- Consider network isolation for databases
- REST API wrapper for external access
- Multi-user authentication system
- Advanced graph querying and visualization
- Memory pruning and optimization
- Support for additional LLM providers
- Conversation export and analysis tools
- Web UI for interaction
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Commit changes with clear messages
- Submit a pull request
This project is open source and available under the MIT License.
Sachin Kumar - sachin-kumar-2003
For issues, questions, or suggestions:
- Open an issue on GitHub
- Check existing documentation
- Review the troubleshooting section
- Mem0 - Memory management framework
- OpenRouter - LLM API provider
- Qdrant - Vector database
- Neo4j - Graph database