Knowledge Graph

A memory-aware fact extraction and knowledge preservation system that combines advanced LLM capabilities with knowledge graph technology for intelligent, context-aware conversational AI.


Overview

This project implements an intelligent conversational agent that:

  • Extracts and maintains structured knowledge from conversations
  • Stores memories in a vector database for semantic search
  • Manages entity relationships in a graph database
  • Provides context-aware responses based on accumulated knowledge
  • Ensures precision-focused information retrieval and preservation

Features

  • Memory Management: Persistent storage and retrieval of conversation history and facts
  • Vector Search: Semantic similarity search using OpenAI embeddings
  • Knowledge Graph: Entity and relationship storage using Neo4j
  • LLM Integration: OpenRouter API for advanced language understanding
  • Context Awareness: Responses grounded in stored memories and facts
  • User Profiles: Per-user memory isolation and personalization

Architecture

Components

  1. Mem0 Memory System: Manages memory storage and retrieval
  2. OpenRouter API: Provides access to advanced LLMs (Trinity Large)
  3. Qdrant: Vector database for semantic search and embeddings
  4. Neo4j: Graph database for knowledge graph storage
  5. OpenAI: Text embedding generation
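These components are typically wired together through a single Mem0 configuration dictionary. The sketch below follows Mem0's documented config shape, with values taken from the Configuration section of this README; treat it as an illustration, not a copy of the repo's actual main.py.

```python
# Sketch of a Mem0 configuration wiring Qdrant, Neo4j, and an
# OpenRouter-hosted LLM together. Key names follow Mem0's config
# conventions; values are placeholders mirroring this README.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "12345678",  # change in production
        },
    },
    "llm": {
        "provider": "openai",  # OpenRouter exposes an OpenAI-compatible API
        "config": {
            "model": "arcee-ai/trinity-large-preview:free",
            "temperature": 0,
        },
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
}
```

With mem0 installed, a dict like this would typically be passed to `Memory.from_config(config)`.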

Data Flow

User Input 
    ↓
Memory Search (Semantic)
    ↓
LLM with Context (System Prompt + Memories)
    ↓
Response Generation
    ↓
Store in Memory
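The flow above can be sketched end-to-end in plain Python. Everything here is a toy stand-in: the memory store is a list with naive keyword matching and the LLM call is stubbed out, so only the shape of the pipeline matches the real system.

```python
# Toy end-to-end sketch of the data flow: search memories, build a
# context-bearing prompt, "generate" a response, then store the turn.
memories: list[str] = []  # stand-in for the vector/graph stores

def search_memories(query: str, top_k: int = 3) -> list[str]:
    """Naive keyword-overlap scoring in place of real semantic search."""
    words = set(query.lower().split())
    scored = [(len(words & set(m.lower().split())), m) for m in memories]
    return [m for s, m in sorted(scored, reverse=True)[:top_k] if s > 0]

def call_llm(system_prompt: str, user_msg: str) -> str:
    """Stub for the OpenRouter chat-completion call."""
    return f"(response grounded in: {system_prompt!r})"

def chat(user_msg: str) -> str:
    context = search_memories(user_msg)                      # memory search
    system_prompt = ("You are a memory-aware agent. "
                     "Memories: " + "; ".join(context))      # prompt + memories
    reply = call_llm(system_prompt, user_msg)                # response generation
    memories.append(f"user said: {user_msg}")                # store in memory
    return reply

chat("my name is Rahul")
print(chat("what is my name"))  # second turn sees the stored memory
```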

Prerequisites

  • Python 3.8+
  • Docker & Docker Compose
  • OpenRouter API Key
  • Environment variables configured

Installation

1. Clone the Repository

git clone https://github.com/sachin-kumar-2003/Knowledge_Graph.git
cd Knowledge_Graph

2. Set Up Virtual Environment

On Windows (PowerShell):

./venv.bat

Or manually:

python -m venv venv
venv\Scripts\activate

3. Install Dependencies

pip install -r requirements.txt

4. Configure Environment Variables

Create a .env file in the root directory:

OPENROUTER_API_KEY=your_api_key_here
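The application loads this file via python-dotenv (listed under Requirements). The KEY=VALUE format is simple enough to sketch by hand, which also makes it clear what the file may and may not contain:

```python
# Minimal illustration of the KEY=VALUE format a .env file uses.
# In the real project, python-dotenv's load_dotenv() does this work.
def parse_env(text: str) -> dict[str, str]:
    pairs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

env = parse_env("OPENROUTER_API_KEY=your_api_key_here\n")
print(env["OPENROUTER_API_KEY"])
```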

5. Start Services with Docker

docker-compose -f docker-compose.graph.yml up -d

This starts:

  • Qdrant (Vector DB): localhost:6333
  • Neo4j (Graph DB): localhost:7687
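The repo's docker-compose.graph.yml is not reproduced in this README, but a minimal compose file for these two services would look roughly like the following. Image tags, volume names, and the NEO4J_AUTH value are assumptions, not the project's actual file:

```yaml
# Hypothetical minimal equivalent of docker-compose.graph.yml.
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage
  neo4j:
    image: neo4j:5
    ports:
      - "7687:7687"   # Bolt
      - "7474:7474"   # browser UI
    environment:
      - NEO4J_AUTH=neo4j/12345678   # change in production
    volumes:
      - neo4j_data:/data

volumes:
  qdrant_data:
  neo4j_data:
```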

Configuration

Qdrant Vector Database

  • Port: 6333
  • Function: Semantic search and embeddings storage

Neo4j Graph Database

  • URL: bolt://localhost:7687
  • Username: neo4j
  • Password: 12345678 (change in production)
  • Admin UI: http://localhost:7474

LLM Configuration

  • Provider: OpenRouter
  • Embedding Model: openai/text-embedding-3-small
  • LLM Model: arcee-ai/trinity-large-preview:free
  • Temperature: 0 (deterministic responses)
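OpenRouter exposes an OpenAI-compatible chat-completions API, so a request with these settings can be sketched as a plain payload. The helper name and prompt wording below are illustrative, not taken from main.py; the model name and temperature match the configuration above.

```python
# Sketch of the chat-completions payload sent to OpenRouter's
# OpenAI-compatible endpoint. Helper and prompt text are illustrative.
def build_request(user_msg: str, memories: list[str]) -> dict:
    system = ("You are a memory-aware fact extraction agent. "
              "Relevant memories: " + "; ".join(memories))
    return {
        "model": "arcee-ai/trinity-large-preview:free",
        "temperature": 0,  # deterministic responses
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_msg},
        ],
    }

payload = build_request("What is my name?", ["user's name is Rahul"])
```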

Usage

Running the Bot

python main.py

Then interact with the bot:

ask... What is my name?
Bot = Based on my memories, your name is Rahul.

ask... What did we discuss earlier?
Bot = [Context-aware response based on stored memories]

How It Works

  1. User enters a query
  2. System searches memories for relevant context
  3. LLM receives system prompt with retrieved memories
  4. Bot generates response with context awareness
  5. Conversation is stored for future reference

System Prompt

The bot operates with a specialized system prompt that:

  • Identifies itself as a memory-aware fact extraction agent
  • Uses retrieved memories to ground responses
  • Maintains professional, analytical tone
  • Signals uncertainty when appropriate
  • Prioritizes accuracy over speculation

API Endpoints

While this is currently a CLI application, the core functions are:

  • chat(message): Main interaction function

    • Input: User message
    • Output: Context-aware response
    • Side Effects: Stores conversation in memory
  • mem_client.search(): Search memories

    • Input: Query, user_id
    • Output: Ranked list of relevant memories
  • mem_client.add(): Add to memory

    • Input: Message, user_id
    • Output: Memory stored in vector and graph databases
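What a "ranked list of relevant memories" means can be illustrated with a toy scorer. The real system ranks by embedding cosine similarity inside Qdrant; here the vectors are just word counts, so only the ranking shape matches mem0's behavior.

```python
# Toy illustration of ranked memory search. Real ranking uses
# embedding cosine similarity in Qdrant; here vectors are word counts.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, store: list[str], top_k: int = 2) -> list[tuple[float, str]]:
    """Return (score, memory) pairs, highest score first."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(m.lower().split())), m) for m in store]
    return sorted(scored, reverse=True)[:top_k]

store = ["the user's name is Rahul", "the user likes graph databases"]
results = search("what is the user's name", store)
```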

Project Structure

Knowledge_Graph/
├── main.py                      # Main application
├── docker-compose.graph.yml     # Docker services configuration
├── venv.bat                     # Virtual environment script (Windows)
├── freeze.bat                   # Dependency freeze script
├── .env                         # Environment variables (not in repo)
└── README.md                    # This file

Requirements

Key Python packages:

  • mem0ai: Memory management system (imported as mem0)
  • openai: OpenAI client library
  • python-dotenv: Environment variable management
  • Neo4j and Qdrant clients (included with mem0)

Install all dependencies:

pip install mem0ai openai python-dotenv

Troubleshooting

Connection Issues

  • Ensure Docker services are running: docker-compose -f docker-compose.graph.yml ps
  • Verify Qdrant is accessible: curl http://localhost:6333/healthz
  • Check Neo4j connection: http://localhost:7474

API Key Issues

  • Verify OPENROUTER_API_KEY is set in .env
  • Check API key validity on OpenRouter dashboard

Memory Not Persisting

  • Confirm Docker volumes are properly mounted
  • Check database logs: docker-compose logs neo4j or docker-compose logs qdrant

Security Considerations

⚠️ Important for Production:

  • Change Neo4j default password (12345678)
  • Secure API keys in environment variables
  • Use environment-specific .env files
  • Implement proper authentication for graph access
  • Consider network isolation for databases

Future Enhancements

  • REST API wrapper for external access
  • Multi-user authentication system
  • Advanced graph querying and visualization
  • Memory pruning and optimization
  • Support for additional LLM providers
  • Conversation export and analysis tools
  • Web UI for interaction

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Commit changes with clear messages
  4. Submit a pull request

License

This project is open source and available under the MIT License.

Author

Sachin Kumar - sachin-kumar-2003

Support

For issues, questions, or suggestions:

  • Open an issue on GitHub
  • Check existing documentation
  • Review the troubleshooting section
