A web interface built with Nuxt for interacting with any Ollama language model locally.
- Docker
- Docker Compose
- Node.js (for local development)
- pnpm
- Ollama (for local development)
- Nuxt.js 3
- Nuxt UI 3
- Tailwind CSS v4
- TailwindCSS Motion
- Marked
- Create a `.env` file in the root directory:

  ```bash
  cp .env.example .env
  ```

- Configure your model in the `.env` file:

  ```env
  # Model Configuration
  NUXT_PUBLIC_LLAMA_MODEL="deepseek-r1:1.5b"
  LLAMA_MODEL="deepseek-r1:1.5b"
  ```

  Note: Both environment variables should typically use the same model name.
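In a Nuxt 3 app, a `NUXT_PUBLIC_*` environment variable overrides the matching key under `runtimeConfig.public`, so `NUXT_PUBLIC_LLAMA_MODEL` maps to `runtimeConfig.public.llamaModel`. The sketch below shows how that wiring typically looks; it is an assumption about this repo's setup, not a copy of its actual config:

```ts
// nuxt.config.ts — minimal sketch (hypothetical; the repo's real config may differ)
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      // Default value; overridden at runtime by NUXT_PUBLIC_LLAMA_MODEL
      llamaModel: 'deepseek-r1:1.5b',
    },
  },
})
```

Components can then read the configured model with `useRuntimeConfig().public.llamaModel`.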
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd iLlama
  ```

- Start the application using Docker Compose:

  ```bash
  docker compose up -d
  ```

- Stop the application using Docker Compose:

  ```bash
  docker compose down
  ```
The application will be available at http://localhost:3000
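For orientation, a Compose setup for this kind of stack usually pairs the Nuxt app with the official `ollama/ollama` image. The sketch below is hypothetical (service names, build context, and volume layout are assumptions); the repository's own `docker-compose.yml` is authoritative:

```yaml
# docker-compose.yml — hypothetical sketch, not the repository's actual file
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_data:/root/.ollama   # pulled models persist here
    ports:
      - "11434:11434"               # Ollama's default API port
  web:
    build: .                        # the Nuxt app (assumed build context)
    environment:
      - NUXT_PUBLIC_LLAMA_MODEL=${NUXT_PUBLIC_LLAMA_MODEL}
      - LLAMA_MODEL=${LLAMA_MODEL}
    ports:
      - "3000:3000"
    depends_on:
      - ollama

volumes:
  ollama_data:
```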
- Install dependencies and start the dev server:

  ```bash
  pnpm install
  pnpm dev
  ```
- Install Ollama
- Pull and run your desired model:

  ```bash
  ollama run deepseek-r1:1.5b # Replace 'deepseek-r1:1.5b' with your actual model name
  ```
The development server will be available at http://localhost:3000
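If the UI starts but no responses come back, confirm that the local Ollama daemon is reachable. Ollama exposes an HTTP API on port 11434 by default, and listing the locally available models is a quick sanity check:

```bash
# List models known to the local Ollama instance (default API port 11434)
curl http://localhost:11434/api/tags
```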
You can use any model supported by Ollama. Some popular options include:
- deepseek
- codellama
- mistral
- llama2
The following models were recently tested in a containerized environment and ran smoothly:

- deepseek-r1:1.5b
- llama3.2:1b
- qwen2.5:0.5b
The model name in your `.env` file must exactly match the model name from Ollama's library. For example: `deepseek-r1:1.5b`, not just `deepseek`. Check Ollama's model library for more options.
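To confirm the exact `name:tag` string to put in `.env`, list what Ollama has pulled locally, and pull the model explicitly if it is missing:

```bash
ollama list                    # shows installed models with their exact name:tag
ollama pull deepseek-r1:1.5b   # replace with the model you want to use
```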
To view installed model(s):

```bash
echo 'Installed models: ' && docker run --rm -v illama_ollama_data:/data alpine ls /data/models/manifests/registry.ollama.ai/library
```
- `NUXT_PUBLIC_LLAMA_MODEL`: Model name used by the frontend
- `LLAMA_MODEL`: Model name for the Docker container to pull

Both variables should typically match and use the exact model name from Ollama's library.
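As an illustration of how the frontend variable ends up being used, a Nuxt server route could forward chat requests to Ollama's `/api/chat` endpoint with the configured model. This is a hedged sketch, not the route shipped in this repo; the Ollama host is assumed to be `localhost:11434` (inside Docker it would be the Compose service name):

```ts
// server/api/chat.post.ts — hypothetical sketch of proxying chat requests to Ollama
export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)               // [{ role: 'user', content: '...' }, ...]
  const model = useRuntimeConfig(event).public.llamaModel  // set via NUXT_PUBLIC_LLAMA_MODEL

  // Forward to Ollama's chat endpoint; stream: false returns a single JSON response
  return await $fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    body: { model, messages, stream: false },
  })
})
```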