A fully open-source alternative to NotebookLM, backed by LlamaCloud.
This project uses uv to manage dependencies. Before you begin, make sure you have uv installed.
On macOS and Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
On Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
For more install options, see uv's official documentation.
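You can confirm the installation succeeded by checking the version:
uv --version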
1. Clone the Repository
git clone https://github.com/run-llama/notebookllama
cd notebookllama/
2. Install Dependencies
uv sync
3. Configure API Keys
First, create your .env file by renaming the example file:
mv .env.example .env
Next, open the .env file and add your API keys:
- OPENAI_API_KEY: find it on the OpenAI Platform
- ELEVENLABS_API_KEY: find it in your ElevenLabs Settings
- LLAMACLOUD_API_KEY: find it on the LlamaCloud Dashboard
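Once filled in, your .env should look something like this (the values below are placeholders, not real keys, and .env.example may contain additional variables):
OPENAI_API_KEY="sk-..."
ELEVENLABS_API_KEY="<your-elevenlabs-key>"
LLAMACLOUD_API_KEY="llx-..."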
4. Activate the Virtual Environment
On macOS and Linux:
source .venv/bin/activate
On Windows:
.\.venv\Scripts\activate
5. Create LlamaCloud Agent & Pipeline
You will now execute two scripts to configure your backend agents and pipelines.
First, create the data extraction agent:
uv run tools/create_llama_extract_agent.py
Next, run the interactive setup wizard to configure your index pipeline.
⚡ Quick Start (Default OpenAI): For the fastest setup, select "With Default Settings" when prompted. This will automatically create a pipeline using OpenAI's text-embedding-3-small embedding model.
🧠 Advanced (Custom Embedding Models): To use a different embedding model, select "With Custom Settings" and follow the on-screen instructions.
Run the wizard with the following command:
uv run tools/create_llama_cloud_index.py
6. Launch Backend Services
This command will start the required Postgres and Jaeger containers.
docker compose up -d
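You can confirm both containers are up with:
docker ps
If you want to inspect traces later, Jaeger's web UI is typically served at http://localhost:16686/ by default.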
7. Run the Application
First, run the MCP server:
uv run src/notebookllama/server.py
Then, in a new terminal window, launch the Streamlit app:
streamlit run src/notebookllama/Home.py
Important: you may need to install ffmpeg if you do not already have it installed.
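For example, on macOS (with Homebrew) or Debian/Ubuntu:
brew install ffmpeg
sudo apt-get install ffmpeg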
Then start exploring the app at http://localhost:8501/.
To contribute to this project, follow the contributing guidelines.
This project is provided under an MIT License.