A Python-based knowledge graph API designed for agentic AI systems with local embeddings.

## Features

- **Local-First Embeddings**: Uses sentence-transformers, Ollama, or other local models by default
- **Agentic Skills Interface**: Exposes operations via the Model Context Protocol (MCP)
- **Temporal Awareness**: Bi-temporal data model tracking both event time and ingestion time
- **Hybrid Search**: Combines semantic search, BM25 keyword search, and graph traversal
- **Privacy-Focused**: No telemetry and no required external services; optional ones (Ollama, Neo4j) run locally
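Hybrid search engines typically merge the ranked lists produced by each retriever into one result list. As an illustration only (not this project's actual implementation), reciprocal rank fusion is a common way to combine semantic and BM25 rankings; all names below are hypothetical:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists into one.

    Each document scores sum(1 / (k + rank)) over every list it appears in;
    k=60 is the constant suggested in the original RRF paper.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)


# Hypothetical result lists from a semantic retriever and a BM25 retriever
semantic = ["doc3", "doc1", "doc2"]
bm25 = ["doc1", "doc4", "doc3"]
fused = reciprocal_rank_fusion([semantic, bm25])
```

Documents ranked highly by both retrievers (here `doc1` and `doc3`) float to the top, which is why rank fusion is a popular glue between keyword and vector search.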
## Prerequisites

Required:

- Python - the project is Python-based
- just - the command runner used throughout this guide
- uv - used to run the MCP server

Optional (but recommended):

- Docker - For Neo4j + Ollama (easiest setup)
- Ollama - For local LLM support (auto-installable via `just`)
Note: The project defaults to local embeddings (sentence-transformers) which don't require any external services. Ollama is only needed if you want to use local LLMs for entity extraction.
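Swapping between local embedding providers like this usually means they share one interface. A minimal sketch of what such an interface could look like — the class names here are illustrative stand-ins, not the project's actual API (though `create_batch` mirrors the usage example later in this README):

```python
import asyncio
from typing import Protocol


class Embedder(Protocol):
    """Anything that turns a batch of texts into vectors."""

    async def create_batch(self, texts: list[str]) -> list[list[float]]: ...


class DummyLocalEmbedder:
    """Stand-in for a local model such as sentence-transformers.

    Embeds each text as (character count, word count) purely for
    demonstration; a real provider would return model vectors.
    """

    async def create_batch(self, texts: list[str]) -> list[list[float]]:
        return [[float(len(t)), float(len(t.split()))] for t in texts]


async def embed_all(embedder: Embedder, texts: list[str]) -> list[list[float]]:
    # No external service required: a local embedder runs in-process.
    return await embedder.create_batch(texts)


vectors = asyncio.run(embed_all(DummyLocalEmbedder(), ["Hello world", "Knowledge graph"]))
```

Because `Embedder` is a `Protocol`, an Ollama-backed provider could slot in without changing any calling code — only the config would differ.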
## Quick Start

### Option 1: Docker (recommended)

```bash
# Clone repository
git clone <repository-url>
cd knowledge

# Check system requirements
just doctor

# One-command setup (installs deps, starts services, inits db)
just setup

# Pull Ollama models (downloaded to Docker container)
just ollama-pull
```

### Option 2: Fully local

```bash
# Clone repository
git clone <repository-url>
cd knowledge

# Install Ollama and Python dependencies
just setup-local

# In a separate terminal, start Ollama
just ollama-serve

# Pull models
just ollama-pull

# Start Neo4j separately (or use Docker for just Neo4j)
# Then initialize database
just init-db
```

### Option 3: Mixed (choose where services run)

```bash
# Install dependencies
just install

# Option A: Use Docker for services
just services-up        # Starts Neo4j + Ollama
just ollama-pull        # Pull models from Docker Ollama

# Option B: Install Ollama locally
just ollama-install     # Installs Ollama (Linux/macOS)
just ollama-serve       # Start Ollama (separate terminal)
just ollama-pull        # Download models

# Initialize database
just init-db
```

## Command Reference
```bash
# System
just                    # Show all available commands
just doctor             # Check system requirements and status

# Development
just dev                # Format, lint, test (quick check)
just check              # Full check (format, lint, typecheck, test)

# Testing
just test-unit          # Unit tests only (fast)
just test-integration   # Integration tests (requires services)
just test               # All tests
just test-coverage      # Tests with coverage report

# Code quality
just format             # Format code with black + ruff
just format-black       # Format with black only
just format-ruff        # Format with ruff only
just lint               # Lint code with ruff
just typecheck          # Type check with mypy + pyright
just typecheck-mypy     # Type check with mypy only
just typecheck-pyright  # Type check with pyright only
just security           # Security check with bandit
just pre-commit         # Run all pre-commit hooks
just install-hooks      # Install git pre-commit hooks

# Ollama
just ollama-check       # Check if Ollama is installed
just ollama-install     # Install Ollama (Linux/macOS)
just ollama-serve       # Start Ollama service
just ollama-pull        # Download required models

# Services
just services-up        # Start Docker services (Neo4j + Ollama)
just services-down      # Stop Docker services
just services-logs      # View service logs

# Running
just run-mcp            # Run MCP server
```

## Project Structure

```
knowledge/
├── core/                  # Core library
│   ├── embedder/          # Embedding providers
│   ├── llm/               # LLM clients
│   ├── graph/             # Graph database operations
│   ├── search/            # Search implementations
│   ├── ingestion/         # Ingestion pipeline
│   └── utils/             # Utilities
├── mcp/                   # MCP server for agentic skills
├── api/                   # REST API (optional)
├── tests/                 # Test suite
│   ├── unit/              # Unit tests
│   └── integration/       # Integration tests
└── examples/              # Usage examples
```
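The bi-temporal model from the feature list means every stored fact carries two timelines: when it became true in the world (event time) and when the ingestion pipeline recorded it. A hypothetical sketch of the idea — this is not the project's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Fact:
    """A graph fact with bi-temporal bookkeeping (illustrative only)."""

    subject: str
    predicate: str
    obj: str
    event_time: datetime  # when this became true in the world
    ingestion_time: datetime = field(  # when the system learned it
        default_factory=lambda: datetime.now(timezone.utc)
    )


# An event from 2023 ingested today: the two timelines differ, which is
# what lets you answer "what did the system know as of date X?" separately
# from "what was true in the world on date X?".
fact = Fact(
    subject="ACME Corp",
    predicate="acquired",
    obj="Widget Inc",
    event_time=datetime(2023, 5, 1, tzinfo=timezone.utc),
)
ingested_after_event = fact.ingestion_time > fact.event_time
```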
## Configuration

See mcp/config.yaml for configuration options:

```yaml
embedder:
  provider: sentence-transformers
  model: all-MiniLM-L6-v2

llm:
  provider: ollama
  model: deepseek-r1:7b

database:
  type: neo4j
  uri: bolt://localhost:7687
```

## Documentation

- CLAUDE.md - Developer guide for working with this codebase
- IMPLEMENTATION_PLAN.md - Detailed implementation roadmap
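A config like the sample above is typically consumed by a small factory that selects a provider at startup. A hedged sketch of that pattern (the provider names come from the sample config; `pick_embedder` itself is hypothetical, not part of this codebase):

```python
# Mirrors the structure of the sample mcp/config.yaml shown above
config = {
    "embedder": {"provider": "sentence-transformers", "model": "all-MiniLM-L6-v2"},
    "llm": {"provider": "ollama", "model": "deepseek-r1:7b"},
    "database": {"type": "neo4j", "uri": "bolt://localhost:7687"},
}


def pick_embedder(cfg: dict) -> str:
    """Map the embedder section to a backend identifier.

    Unknown providers fail loudly at startup rather than at first query.
    """
    known = {"sentence-transformers", "ollama"}
    provider = cfg["embedder"]["provider"]
    if provider not in known:
        raise ValueError(f"unknown embedder provider: {provider}")
    return f"{provider}:{cfg['embedder']['model']}"


backend = pick_embedder(config)
```

Validating the provider once, at load time, keeps misconfiguration errors close to the config file instead of deep inside a request.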
## Running the MCP Server

```bash
cd mcp
uv run python server.py
```

## Usage Example

```python
import asyncio

from core.embedder.sentence_transformers import SentenceTransformerEmbedder
from core.graph.neo4j_driver import Neo4jDriver


async def main() -> None:
    # Initialize components
    embedder = SentenceTransformerEmbedder()
    driver = Neo4jDriver(uri="bolt://localhost:7687", user="neo4j", password="password")

    # Generate embeddings
    texts = ["Hello world", "Knowledge graph"]
    embeddings = await embedder.create_batch(texts)


asyncio.run(main())
```

## License

MIT

## Contributing

Contributions welcome! Please read the development guidelines in CLAUDE.md.