Cognee - Build AI memory with a Knowledge Engine that learns

Demo · Docs · Learn More · Join Discord · Join r/AIMemory · Community Plugins & Add-ons

Use our knowledge engine to build personalized and dynamic memory for AI Agents.

🌐 Available Languages : Deutsch | Español | Français | 日本語 | 한국어 | Português | Русский | 中文

About Cognee

Cognee is an open-source knowledge engine that lets you ingest data in any format or structure and continuously learns to provide the right context for AI agents. It combines vector search, graph databases and cognitive science approaches to make your documents both searchable by meaning and connected by relationships as they change and evolve.

Help us reach more developers and grow the cognee community. Star this repo!

📚 Check our detailed documentation for setup and configuration.

🦀 Available as a plugin for your OpenClaw — cognee-openclaw

Why use Cognee:

  • Knowledge infrastructure — unified ingestion, graph/vector search, runs locally, ontology grounding, multimodal
  • Persistent and learning agents — learn from feedback, context management, cross-agent knowledge sharing
  • Reliable and trustworthy agents — agentic user/tenant isolation, traceability, OTEL collector, audit trails

Basic Usage & Feature Guide

To learn more, check out this short, end-to-end Colab walkthrough of Cognee's core features.

Quickstart

Let’s try Cognee in just a few lines of code.

Prerequisites

  • Python 3.10 to 3.13

Step 1: Install Cognee

You can install Cognee with pip, poetry, uv, or your preferred Python package manager.

uv pip install cognee
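
The equivalent commands for pip and poetry are:

pip install cognee
poetry add cognee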

Step 2: Configure the LLM

import os
os.environ["LLM_API_KEY"] = "YOUR_OPENAI_API_KEY"

Alternatively, create a .env file using our template.
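
As a minimal sketch, a .env might look like the following. LLM_API_KEY is the variable used above; the provider and model variables are illustrative and may differ from the template, so check it before copying.

# Minimal .env sketch; variable names other than LLM_API_KEY are assumptions
LLM_API_KEY="your-openai-api-key"
LLM_PROVIDER="openai"
LLM_MODEL="gpt-4o-mini"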

To integrate other LLM providers, see our LLM Provider Documentation.

Step 3: Run the Pipeline

Cognee will take your documents, load them into the knowledge engine, and let you search across the combined vector and graph relationships.

Now, run a minimal pipeline:

import cognee
import asyncio
from pprint import pprint


async def main():
    # Add text to cognee
    await cognee.add("Cognee turns documents into AI memory.")

    # Add to knowledge engine
    await cognee.cognify()

    # Query the knowledge graph
    results = await cognee.search("What does Cognee do?")

    # Display the results
    for result in results:
        pprint(result)


if __name__ == '__main__':
    asyncio.run(main())

As you can see, the output is generated from the document we previously stored in Cognee:

  Cognee turns documents into AI memory.
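
If you want to steer retrieval, search() can also take an explicit query type. A hedged sketch follows; the SearchType import path and enum value are assumptions based on cognee's docs, so verify them against the search documentation:

# Run inside an async function, as in main() above.
# Assumption: SearchType lives at this import path and exposes GRAPH_COMPLETION.
from cognee.api.v1.search import SearchType

results = await cognee.search(
    query_type=SearchType.GRAPH_COMPLETION,
    query_text="What does Cognee do?",
)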

Use the Cognee CLI

As an alternative, you can get started with these essential commands:

cognee-cli add "Cognee turns documents into AI memory."

cognee-cli cognify

cognee-cli search "What does Cognee do?"
cognee-cli delete --all

To open the local UI, run:

cognee-cli -ui

Examples

Browse more examples in the examples/ folder — demos, guides, custom pipelines, and database configurations.

Use Case 1 — Customer Support Agent

Goal: Resolve customer issues using their personal data across finance, support, and product history.

User: "My invoice looks wrong and the issue is still not resolved."

Cognee tracks: past interactions, failed actions, resolved cases, product history

# Agent response:
Agent: "I found 2 similar billing cases resolved last month.
        The issue was caused by a sync delay between the payment
        and invoice systems; a fix was applied to your account."

# What happens under the hood:
- Unifies data sources from various company channels
- Reconstructs the interaction timeline and tracks outcomes
- Retrieves similar resolved cases
- Maps to the best resolution strategy
- Updates memory after execution so the agent never repeats the same mistake
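
A minimal sketch of how this use case maps onto the quickstart API. The dataset name and the *_text variables are hypothetical placeholders, not a prescribed workflow:

# Hypothetical sketch: unify support-relevant sources into one dataset,
# then query that memory when a ticket arrives.
# billing_text and ticket_text are placeholder strings of source data.
await cognee.add(billing_text, dataset_name="support")
await cognee.add(ticket_text, dataset_name="support")
await cognee.cognify()

answer = await cognee.search("Why does this customer's invoice look wrong?")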

Use Case 2 — Expert Knowledge Distillation (SQL Copilot)

Goal: Help junior analysts solve tasks by reusing expert-level queries, patterns, and reasoning.

User: "How do I calculate customer retention for this dataset?"

Cognee tracks: expert SQL queries, workflow patterns, schema structures, successful implementations

# Agent response:
Agent: "Here's how senior analysts solved a similar retention query.
        Cognee matched your schema to a known structure and adapted
        the expert's logic to fit your dataset."

# What happens under the hood:
- Extracts and stores patterns from expert SQL queries and workflows
- Maps the current schema to previously seen structures
- Retrieves similar tasks and their successful implementations
- Adapts expert reasoning to the current context
- Updates memory with new successful patterns so junior analysts perform at near-expert level
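
In the same spirit, a hedged sketch for the SQL copilot. The variables and the question are placeholders:

# Hypothetical sketch: ingest expert queries and schema notes as text,
# then let a junior analyst query the distilled knowledge.
# expert_sql_text and schema_notes_text are placeholder strings.
await cognee.add(expert_sql_text)
await cognee.add(schema_notes_text)
await cognee.cognify()

suggestion = await cognee.search(
    "How do senior analysts calculate customer retention here?"
)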

Deploy Cognee

Use Cognee Cloud for a fully managed experience, or self-host with one of the 1-click deployment configurations below.

| Platform | Best For | Command |
|---|---|---|
| Cognee Cloud | Managed service, no infrastructure to maintain | Sign up |
| Modal | Serverless, auto-scaling, GPU workloads | bash distributed/deploy/modal-deploy.sh |
| Railway | Simplest PaaS, native Postgres | railway init && railway up |
| Fly.io | Edge deployment, persistent volumes | bash distributed/deploy/fly-deploy.sh |
| Render | Simple PaaS with managed Postgres | Deploy to Render button |
| Daytona | Cloud sandboxes (SDK or CLI) | See distributed/deploy/daytona_sandbox.py |

See the distributed/ folder for deploy scripts, worker configurations, and additional details.

Community & Support

Contributing

We welcome contributions from the community! Your input helps make Cognee better for everyone. See CONTRIBUTING.md to get started.

Code of Conduct

We're committed to fostering an inclusive and respectful community. Read our Code of Conduct for guidelines.

Research & Citation

We recently published a research paper on optimizing knowledge graphs for LLM reasoning:

@misc{markovic2025optimizinginterfaceknowledgegraphs,
      title={Optimizing the Interface Between Knowledge Graphs and LLMs for Complex Reasoning},
      author={Vasilije Markovic and Lazar Obradovic and Laszlo Hajdu and Jovan Pavlovic},
      year={2025},
      eprint={2505.24478},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.24478},
}
