This repository contains an initial implementation of the LLM OS proposed by Andrej Karpathy. He describes it in this tweet, this tweet, and this video.
- LLMs are the kernel process of an emerging operating system.
- This process (LLM) will solve problems by coordinating other resources (like memory or computation tools).
- The LLM OS Vision:
  - It can read/generate text
  - It has more knowledge than any single human about all subjects
  - It can browse the internet
  - It can use existing software infra (calculator, python, mouse/keyboard)
  - It can see and generate images and video
  - It can hear and speak, and generate music
  - It can think for a long time using a system 2
  - It can “self-improve” in domains
  - It can be customized and fine-tuned for specific tasks
  - It can communicate with other LLMs
`[x]` indicates functionality that is implemented in the LLM OS app.
Note: Fork and clone this repository if needed
```shell
mkdir llm_os_env
cd llm_os_env
conda create --prefix ./env python=3.11  # choose your python version
conda activate ./env
pip install -r requirements.txt
```
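Since the setup above pins Python 3.11, a quick sanity check from inside the activated environment can catch a mismatched interpreter before anything else fails. This is a minimal sketch; the `check_python` helper is not part of the repo:

```python
import sys

def check_python(minimum=(3, 11)):
    """Return True if the running interpreter meets the minimum (major, minor) version."""
    return sys.version_info[:2] >= minimum

if __name__ == "__main__":
    print("python ok" if check_python() else "python version too old")
```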
- Our initial implementation uses GPT-4o, so export your OpenAI API key:

```shell
export OPENAI_API_KEY=***
```
- To use Exa for research, export your `EXA_API_KEY` (get it from here):

```shell
export EXA_API_KEY=xxx
```
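Because a missing key typically only surfaces as an error at request time, it can help to verify both variables up front. A small sketch, assuming the key names above; the `missing_keys` helper is illustrative and not part of the repo:

```python
import os

def missing_keys(required=("OPENAI_API_KEY",), optional=("EXA_API_KEY",)):
    """Return (missing required, missing optional) API key names not set in the environment."""
    missing_req = [k for k in required if not os.environ.get(k)]
    missing_opt = [k for k in optional if not os.environ.get(k)]
    return missing_req, missing_opt
```

Calling `missing_keys()` before launching the app makes it obvious whether the `export` steps took effect in the current shell.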
We use PgVector to provide long-term memory and knowledge to the LLM OS. Please install Docker Desktop and run PgVector using either the helper script or the `docker run` command.
- Run using the helper script:

```shell
./pgvector/run_pgvector.sh
```

- OR run using the `docker run` command:

```shell
docker run -d \
  -e POSTGRES_DB=ai \
  -e POSTGRES_USER=ai \
  -e POSTGRES_PASSWORD=ai \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgvolume:/var/lib/postgresql/data \
  -p 5532:5432 \
  --name pgvector \
  phidata/pgvector:16
```
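The container above exposes Postgres on host port 5532 with user, password, and database all set to `ai`. If you need to point other tools at the same instance, the connection URL can be derived from those flags. A minimal sketch: the SQLAlchemy-style URL format and the `psycopg` driver name are assumptions about how the app connects, and `pgvector_db_url` is not a function in the repo:

```python
def pgvector_db_url(user="ai", password="ai", host="localhost", port=5532, db="ai"):
    # Defaults mirror the docker run flags above:
    # POSTGRES_USER=ai, POSTGRES_PASSWORD=ai, POSTGRES_DB=ai, -p 5532:5432.
    return f"postgresql+psycopg://{user}:{password}@{host}:{port}/{db}"
```

Note that the host port is 5532, not the Postgres default 5432; the `-p 5532:5432` flag maps it to the container's internal port.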
- Run the LLM OS app:

```shell
streamlit run app.py
```

- Open localhost:8501 to view your LLM OS.