Bring your exported ChatGPT conversations into a tiny, local memory and explore them with a minimal AeonCore CLI.
- Parse a `conversations.json` export from ChatGPT
- Generate text embeddings with a local model (default) or OpenAI
- Build a FAISS index for fast similarity search
- Chat with a lightweight AeonCore REPL (optional)
- Hack on top of a simple Typer-based CLI
- Legacy folders and experimental modules (see `_CLEANUP_TODO.md`)
- Cloud deployments, full LoopDesk apps, or production hardening
- Proprietary models or services beyond what you configure yourself
- Python 3.10+
- `sentence-transformers` and `faiss` for local embeddings (CPU or GPU)
- Optional: `openai` package and API key for OpenAI embeddings
- Optional: Ollama running locally for the chat REPL
```bash
git clone https://example.com/aeoncore-mvp.git
cd aeoncore-mvp
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\Activate.ps1
pip install -e .
```

✅ Supported input: `conversations.json` export from ChatGPT (go to Settings → Data Controls → Export Data, then unzip and point AeonCore MVP at the `conversations.json` file).
❌ Not supported (for now): other JSON formats (Slack, Discord, Gmail, etc.). This is intentional — we focused the hackathon MVP on one clean path end-to-end.
💡 Why? Because hackathon time is short — we scoped narrowly so we could actually deliver something that works out of the box. Future formats can be added later.
🧪 Tested on:
- Fresh macOS (MacBook Air, Python 3.11 via Homebrew)
- Fresh Linux (Ubuntu, Python 3.11)
- Both verified with a clean `git clone` + `pip install -e .`
- Export your ChatGPT data: in ChatGPT go to Settings → Data Controls → Export Data and follow the email link to download `conversations.json` (see OpenAI Help Center: "How do I export my ChatGPT history and data?").
- Ingest the export:
  ```bash
  python aeoncore/ingestion/chat_history_ingestor.py path/to/conversations.json \
    --output dynamic_memory/chat_history_dump.jsonl
  ```
- Vectorize chats:
  ```bash
  python scripts/build_faiss_index.py
  ```
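The ingest step flattens the nested export into one JSON object per line. A minimal sketch of that idea, assuming the usual ChatGPT export shape (a list of conversations, each with a `mapping` of message nodes); the real logic lives in `aeoncore/ingestion/chat_history_ingestor.py`, and the output field names here are illustrative, not the project's actual schema:

```python
import json

def export_to_jsonl(conversations: list) -> list[str]:
    """Flatten a ChatGPT-style export into JSONL lines (one message per line)."""
    lines = []
    for conv in conversations:
        title = conv.get("title", "")
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue  # root/system nodes may carry no message
            parts = msg.get("content", {}).get("parts", [])
            text = " ".join(p for p in parts if isinstance(p, str)).strip()
            if not text:
                continue
            lines.append(json.dumps({
                "conversation": title,
                "role": msg.get("author", {}).get("role", "unknown"),
                "text": text,
            }))
    return lines

# Tiny synthetic export showing the shape of the transformation.
sample = [{
    "title": "Demo",
    "mapping": {
        "n1": {"message": {"author": {"role": "user"},
                           "content": {"parts": ["Hello Aeon"]}}},
        "n2": {"message": None},
    },
}]
for line in export_to_jsonl(sample):
    print(line)
```

Each printed line is a self-contained JSON record, which is what makes the dump easy to stream into the embedding step.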
- Chat with Aeon (optional; requires Ollama):
  ```bash
  aeon chat start
  ```
  Sample query: `What did I talk about last week?`
  Exit with `quit`.
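Under the hood, a query like the one above is embedded and matched against stored chat vectors by similarity. FAISS does this at scale; the core idea fits in a few lines of plain Python (the 3-d vectors and chunk names below are made up for illustration):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], index: list[tuple], k: int = 2) -> list[str]:
    """Rank stored (id, vector) pairs by similarity to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [item[0] for item in ranked[:k]]

# Toy "memory": chat chunks with pretend embeddings.
memory = [
    ("last-week-plans", [0.9, 0.1, 0.0]),
    ("recipe-chat",     [0.0, 0.9, 0.1]),
    ("travel-notes",    [0.7, 0.2, 0.1]),
]
print(top_k([1.0, 0.0, 0.0], memory))  # nearest chunks first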
```
aeoncore-mvp/
├── aeoncore/ingestion/            # ChatGPT export parser
├── scripts/                       # Utility scripts (FAISS builder)
├── src/aeon/cli/                  # Typer CLI entrypoints
├── src/aeon/core/loop_barometer/  # Embedding providers
└── dynamic_memory/                # Created at runtime for vectors & memory
```
The embedding CLI reads provider settings from environment variables:
```bash
export EMBED_PROVIDER=oss20b  # or openai
export EMBED_MODEL=oss-20b    # or text-embedding-3-small
```

| Command | Description |
|---|---|
| `python aeoncore/ingestion/chat_history_ingestor.py <in> --output <out>` | Convert ChatGPT export to JSONL |
| `python scripts/build_faiss_index.py` | Build `dynamic_memory/chat_history.faiss` |
| `aeon hello` | Sanity check for the CLI |
| `aeon embed input.txt` | Embed each line of `input.txt` |
| `aeon chat start` | Interactive REPL (needs Ollama) |
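The `EMBED_PROVIDER` / `EMBED_MODEL` variables shown earlier select the embedding backend at runtime. A sketch of how such a switch typically looks; the real provider classes live under `src/aeon/core/loop_barometer/`, and the function and return values here are illustrative:

```python
import os

def pick_provider() -> str:
    """Resolve the embedding backend from the environment, defaulting to local."""
    provider = os.environ.get("EMBED_PROVIDER", "oss20b")
    model = os.environ.get("EMBED_MODEL", "oss-20b")
    if provider == "openai":
        # Here the real code would call the OpenAI embeddings API with `model`.
        return f"openai:{model}"
    # Otherwise the real code would load a local sentence-transformers model.
    return f"local:{model}"

os.environ["EMBED_PROVIDER"] = "openai"
os.environ["EMBED_MODEL"] = "text-embedding-3-small"
print(pick_provider())  # → openai:text-embedding-3-small
```

Keeping the switch in environment variables means the same CLI commands work unchanged whichever backend you configure.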
- `conversations.json` not found – check the path and that the export was unzipped.
- `ModuleNotFoundError: sentence_transformers` – install `sentence-transformers` or set `EMBED_PROVIDER=openai`.
- `faiss` import errors – install `faiss-cpu` (or `faiss-gpu`) matching your platform.
- Long paths on Windows – run `git config --system core.longpaths true`.
- Proxy/SSL errors during `pip install` – ensure you have network access or configure proxy settings.
License: see repository maintainers.
Built with Typer, sentence-transformers, and FAISS.