iLlama

A web interface built with Nuxt for interacting with any Ollama language model locally.

Prerequisites

  • Docker
  • Docker Compose
  • Node.js (for local development)
  • pnpm
  • Ollama (for local development)

Technologies & Dependencies

Environment Setup

  1. Create a .env file in the root directory:
cp .env.example .env
  2. Configure your model in the .env file:
# Model Configuration
NUXT_PUBLIC_LLAMA_MODEL="deepseek-r1:1.5b"
LLAMA_MODEL="deepseek-r1:1.5b"

Note: Both environment variables should typically use the same model name.
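
In a Nuxt app, environment variables prefixed with NUXT_PUBLIC_ are exposed to the client through runtimeConfig.public. The sketch below shows one way the config could wire this up; the key name llamaModel is assumed for illustration and is not confirmed from this repository.

// nuxt.config.ts (minimal sketch; the key name `llamaModel` is assumed)
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      // Overridden at runtime by NUXT_PUBLIC_LLAMA_MODEL from .env
      llamaModel: 'deepseek-r1:1.5b',
    },
  },
})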

Docker Setup

  1. Clone the repository:
git clone <repository-url>
cd iLlama
  2. Start the application using Docker Compose:
docker compose up -d
  3. Stop the application using Docker Compose:
docker compose down

The application will be available at http://localhost:3000

Local Development Setup

  1. Install dependencies and start the dev server:
pnpm install
pnpm dev
  2. Install Ollama

  3. Pull and run your desired model:

ollama run deepseek-r1:1.5b  # Replace 'deepseek-r1:1.5b' with your actual model name

The development server will be available at http://localhost:3000
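
Under the hood, the UI talks to the local Ollama server, which exposes an HTTP API on port 11434 by default. The snippet below is a minimal TypeScript sketch of a non-streaming generate request; it illustrates the API shape only and is not the app's actual data layer.

// ollama-generate.ts (illustrative sketch, not the app's implementation)
interface GenerateResponse {
  model: string
  response: string
  done: boolean
}

async function generate(model: string, prompt: string): Promise<string> {
  // Ollama's REST API listens on http://localhost:11434 by default
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // stream: false returns a single JSON object instead of a token stream
    body: JSON.stringify({ model, prompt, stream: false }),
  })
  const data = (await res.json()) as GenerateResponse
  return data.response
}

// Example: generate('deepseek-r1:1.5b', 'Hello!').then(console.log)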

Available Models

You can use any model supported by Ollama. Some popular options include:

  • deepseek
  • codellama
  • mistral
  • llama2

The following models were recently tested in a containerized environment and ran smoothly:

  • deepseek-r1:1.5b
  • llama3.2:1b
  • qwen2.5:0.5b

The model name in your .env file must exactly match the model name from Ollama's library. For example, use deepseek-r1:1.5b, not just deepseek.

Check Ollama's model library for more options.

View Installed Models

To view installed model(s):

echo 'Installed models: ' && docker run --rm -v illama_ollama_data:/data alpine ls /data/models/manifests/registry.ollama.ai/library
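
Alternatively, a running Ollama server can report its installed models through its API. A minimal sketch, assuming the server is reachable on the default port 11434:

// list-models.ts (sketch; assumes Ollama is reachable on the default port)
interface TagsResponse {
  models: { name: string; size: number }[]
}

async function listInstalledModels(): Promise<string[]> {
  // GET /api/tags returns the models available locally
  const res = await fetch('http://localhost:11434/api/tags')
  const data = (await res.json()) as TagsResponse
  return data.models.map(m => m.name)
}

listInstalledModels().then(names => console.log('Installed models:', names))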

Environment Variables

  • NUXT_PUBLIC_LLAMA_MODEL: Model name used by the frontend
  • LLAMA_MODEL: Model name for Docker container to pull

Both variables should typically match and use the exact model name from Ollama's library.
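
On the frontend, a Nuxt component or composable would typically read the public variable through useRuntimeConfig(). A minimal sketch, assuming the runtimeConfig key llamaModel from the earlier config sketch:

// composables/useModelName.ts (sketch; the key name `llamaModel` is assumed)
export function useModelName(): string {
  // useRuntimeConfig is auto-imported in Nuxt 3
  const config = useRuntimeConfig()
  return config.public.llamaModel as string
}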
