slice-code/ollama-chat-app
Chat UI App with el.js & Ollama Integration


A modern chat interface with Ollama AI integration, built with el.js, a lightweight, chainable DOM manipulation library. It features a WhatsApp-inspired design, persistent conversation history, and optimized context management for fast AI responses.

Chat UI Preview

πŸ“ About

This project demonstrates how to create a production-ready chat application using el.js for DOM manipulation and Ollama for AI-powered conversations. It showcases advanced UI patterns including session management, smart context optimization, dual-layout modes, and WhatsApp-style speech bubbles.

✨ Features

Core Chat Functionality

  • 💬 Real-time messaging interface with streaming responses
  • 🤖 Ollama AI Integration with multiple model support
  • ⏱️ Typing indicators with animated dots
  • 🕐 Message timestamps
  • 🗨️ WhatsApp-style speech bubbles with triangular pointers

🆕 Session Management

  • 📋 Conversation History Sidebar - Browse all chat sessions
  • 🔁 Session Switching - Click to switch between conversations instantly
  • 🗑️ Delete Sessions - Remove unwanted conversations with confirmation
  • ➕ New Chat Button - Start fresh conversations without page reload
  • 💾 Persistent Storage - All conversations saved to a SQLite database
  • 🎯 Clean Start - No pre-selected session on initial load

🚀 Performance Optimizations

  • ⚡ Smart Context Management - Summary + recent messages pattern
  • 🧠 Auto-Summarization - AI-powered conversation compaction
  • 📊 Dynamic Context Sizing - Optimal token usage (4000-8192 tokens)
  • 🎯 Token-Aware Building - Stay within model context limits
  • 🔄 Auto-Compaction - Triggered at 75% capacity threshold
  • ⚙️ Model Parameter Tuning - Temperature 0.7, top_p 0.9

📦 Data Persistence

  • 💿 SQLite Database - Reliable local storage
  • 📂 Per-Session Isolation - Strict context separation
  • 🔄 Auto-Refresh - History updates after first message
  • 🗄️ Message Metadata - Timestamps, counts, session info

Dual Display Modes

  • 🖥️ Full Mode: Centered large chat window (800px × 600px)
  • 📱 Popup Mode: Compact floating widget with toggle button

UI/UX Highlights

  • 🎨 WhatsApp-inspired color scheme (teal header, green accents)
  • 🌈 Fully customizable color themes via configuration
  • ✨ Smooth animations and transitions
  • 📱 Responsive design for all screen sizes
  • 🔘 Floating action button for popup mode
  • ❌ Close button in popup header
  • 🎯 Configurable border radius (0px to any value)
  • 🎨 Custom background colors
  • 💫 Customizable box shadows
  • 🤖 Custom bot name and icon (emoji or HTML)

Advanced API Integration

  • 🔌 Built-in onChat callback for API integration
  • ⚡ Async/await support for streaming API calls
  • 🔄 Real-time response streaming from Ollama
  • 🔁 Automatic retry mechanism with exponential backoff
  • ⏱️ Configurable typing delay
  • ❌ Error handling with custom messages
  • 🎭 Multiple response types:
    • Plain Text: Simple string responses
    • HTML String: Rich HTML content with interactive elements
    • el.js Object: Custom UI components built with el.js
  • 💉 Dependency injection pattern for clean code:
    • onChat: async (message, onStream, sendQuickReply) => {}
    • No global variables needed
    • Scoped function access via closure
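
The retry behavior listed above can be sketched as a small async wrapper. This is an illustration, not the app's actual code: the function name fetchWithRetry and its parameters are assumptions.

```javascript
// Sketch of a retry wrapper with exponential backoff.
// Names (fetchWithRetry, maxRetries, baseDelayMs) are illustrative.
async function fetchWithRetry(fn, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Give up after the final attempt
      if (attempt === maxRetries - 1) throw err;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

The callback passed in would typically be the fetch call to the Ollama proxy, so transient network errors are retried while the final failure surfaces as the configured retryMessage.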

🤖 Ollama Features

  • 📚 Model Selection Dropdown - Choose from available Ollama models
  • 🔄 Live Model List - Fetched from the Ollama API on startup
  • ⚡ Streaming Responses - Real-time text generation display
  • 🧠 Context Awareness - Smart history management per session
  • 🎯 Optimized Prompts - Balanced creativity and coherence

🚀 Quick Start

Prerequisites

  1. Install Ollama (if not already installed):

    # Linux/Mac
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Windows - Download from https://ollama.com
  2. Pull a Model (e.g., Llama 3.2):

    ollama pull llama3.2:latest
  3. Start Ollama Server:

    ollama serve

1. Using Local Server (Recommended)

# Install dependencies (if needed)
npm install

# Start the server
node index.js

# Open in browser
http://localhost:3000

2. Direct Browser Opening

Simply open index.html in your browser (limited functionality without backend).

βš™οΈ Configuration

UI Configuration

Customize the chat appearance in index.html (lines 25-45):

const chatConfig = {
  type: 'full',                   // 'full' or 'popup'
  
  // Size configuration
  full: {
    width: '800px',               // Full mode width
    height: '600px'               // Full mode height
  },
  popup: {
    width: '350px',               // Popup mode width
    height: '450px'               // Popup mode height
  },
  
  // Appearance configuration
  background: '#ffffff',          // Background color
  borderRadius: '12px',           // Border radius ('0px' for no radius)
  boxShadow: undefined,           // Custom box shadow (optional)
  botName: 'AI Assistant',        // Bot name displayed in header
  botIcon: '🤖',                  // Bot icon (emoji or HTML)
  
  // Color configuration
  primaryColor: '#25D366',        // Send button, focus border
  secondaryColor: '#128C7E',      // Gradient partner
  userMessageColor: '#DCF8C6',    // User message bubbles
  botAvatarColor1: '#00A884',     // Bot avatar gradient start
  botAvatarColor2: '#25D366',     // Bot avatar gradient end
  headerGradient1: '#075E54',     // Header gradient start
  headerGradient2: '#128C7E',     // Header gradient end
  
  // Chat callback configuration
  typingDelay: 1000,              // Typing indicator delay (ms)
  retryMessage: "Error message",  // Custom error message
  
  // onChat callback with dependency injection
  onChat: async (message, onStream, sendQuickReply) => {
    // Simulate API delay
    await new Promise(resolve => setTimeout(resolve, 1500));
    
    // ✅ Response Type 1: Plain Text
    return `Response to: ${message}`;
    
    // ✅ Response Type 2: HTML String
    /*
    return `
      <div>
        <p>Halo! Ada yang bisa saya bantu?</p>
        <button onclick="sendQuickReply('Info')">ℹ️ Info</button>
      </div>
    `;
    */
    
    // ✅ Response Type 3: el.js Object (Custom UI)
    /*
    const buttonGroup = el('div')
      .css({'display': 'flex', 'gap': '8px'});
    
    const btnInfo = el('button')
      .text('ℹ️ Info')
      .css({'background': '#25D366', 'color': 'white'})
      .click(() => sendQuickReply('Info produk'));
    
    buttonGroup.child([btnInfo]);
    
    const customUI = el('div').child([
      el('p').text('Halo! Ada yang bisa saya bantu?'),
      buttonGroup
    ]);
    
    return { el: customUI };
    */
    
    // ✅ Streaming Response
    /*
    onStream("Hello");
    onStream(" World!");
    return; // Return undefined when streaming
    */
  }
};

🤖 Ollama Configuration

The app automatically integrates with Ollama. Key features:

  • Model Selection: Dropdown populated from available Ollama models
  • Context Optimization: Automatic smart management (4000-8192 tokens)
  • Session Isolation: Each conversation has separate context
  • Auto-Summarization: AI-powered compaction for long conversations

Performance Parameters (configured in ollama-chat.js):

{
  num_ctx: 4096,          // Context window size (dynamic: 4096-8192)
  temperature: 0.7,       // Creativity vs coherence balance
  top_p: 0.9             // Nucleus sampling quality
}
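
In request form, these parameters travel in the options field of the chat request body, following Ollama's /api/chat format. A sketch, with values mirroring the defaults above:

```javascript
// Sketch: how the tuning parameters ride along in an Ollama chat request.
// The options field is part of Ollama's /api/chat request format.
const chatRequest = {
  model: 'llama3.2',
  messages: [{ role: 'user', content: 'Hello' }],
  stream: true,
  options: {
    num_ctx: 4096,      // context window size (dynamic: 4096-8192)
    temperature: 0.7,   // creativity vs coherence balance
    top_p: 0.9          // nucleus sampling quality
  }
};

// Sent through the local proxy endpoint, e.g.:
// fetch('/api/ollama/chat', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(chatRequest)
// });
```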

Context Management Strategy:

  • Recent Messages: Keep last 10 messages in memory
  • Auto-Summary: Triggered at 15+ messages
  • Token Limit: Max 4000 tokens per request
  • Compaction: Auto-compact at 75% capacity
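
The strategy above can be sketched as a token-aware context builder. This is a simplified illustration, not the actual ollama-chat.js code: the function names are assumptions, and the 4-characters-per-token estimate is only a rough heuristic.

```javascript
// Rough token estimate: ~4 characters per token (a common heuristic).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Build a compact context: optional summary + the most recent messages,
// stopping once the token budget would be exceeded.
function buildContext(history, summary, { recentCount = 10, maxTokens = 4000 } = {}) {
  const context = [];
  let tokens = 0;

  if (summary) {
    context.push({ role: 'system', content: `Previous context summary: ${summary}` });
    tokens += estimateTokens(summary);
  }

  // Walk the recent window newest-first, then restore chronological order.
  const recent = history.slice(-recentCount);
  const kept = [];
  for (let i = recent.length - 1; i >= 0; i--) {
    const cost = estimateTokens(recent[i].content);
    if (tokens + cost > maxTokens) break;
    kept.unshift(recent[i]);
    tokens += cost;
  }

  return context.concat(kept);
}
```

With a 50-message history this yields roughly a dozen messages (summary plus the recent window), which matches the "~12 messages sent to Ollama" figure shown later in the How It Works section.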

πŸ“ Project Structure

chat-ui/
├── index.html          # Main HTML file with UI configuration
├── index.js            # HTTP server + Ollama API proxy
├── database.js         # SQLite database wrapper
├── el.js               # el.js DOM manipulation library
├── ollama-chat.js      # Ollama integration & session management
└── chat-ui/
    └── chat-ui.js      # Chat app logic (exported function)

data/
└── chat-history.db     # SQLite database for conversation history

🔌 API Endpoints

Ollama APIs (Proxy)

GET  /api/ollama/tags          # List available models
GET  /api/ollama/ps            # List running models
GET  /api/ollama/version       # Ollama version info
POST /api/ollama/generate      # Generate text completion
POST /api/ollama/chat          # Chat completion (streaming)

Conversation History APIs (SQLite)

GET    /api/conversations                 # List all sessions
GET    /api/conversations?session_id=X    # Get session messages
POST   /api/conversations                 # Add message to session
DELETE /api/conversations/:id             # Delete entire session
GET    /api/stats                         # Get database statistics

Example Usage

Send a Message:

const response = await fetch('/api/conversations', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    session_id: 'session_123',
    role: 'user',
    content: 'Hello, how are you?'
  })
});

Load Conversation History:

const response = await fetch(`/api/conversations?session_id=session_123`);
const data = await response.json();
console.log(data.history); // Array of messages

Delete a Session:

await fetch('/api/conversations/session_123', {
  method: 'DELETE'
});

💡 How It Works

Architecture Overview

The application consists of three layers:

  1. Frontend UI (chat-ui.js, el.js) - User interface and DOM manipulation
  2. Session Management (ollama-chat.js) - Ollama integration, history loading, context optimization
  3. Backend Server (index.js, database.js) - HTTP server, API proxy, SQLite persistence

Chat Flow

// 1. User sends message
User types "Hello" → Click send

// 2. Save to database
POST /api/conversations
  → SQLite stores message
  → Returns message ID

// 3. Send to Ollama (with optimized context)
POST /api/ollama/chat
  Body: {
    model: "llama3.2",
    messages: [
      // Smart context: summary + recent messages
      { role: 'system', content: 'Previous context summary: ...' },
      { role: 'user', content: 'Recent message 1' },
      { role: 'assistant', content: 'Recent response 1' },
      // ... more recent messages
      { role: 'user', content: 'Hello' } // New message
    ],
    stream: true
  }

// 4. Stream response back
Ollama → Backend → Frontend
  Chunk by chunk in real-time

// 5. Save response to database
POST /api/conversations
  → Stores AI response for future context
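
Step 4, streaming the response in the browser, can be sketched as follows. Ollama's /api/chat emits newline-delimited JSON, each line carrying a message.content fragment; the function name streamChat is illustrative, and onStream stands for the callback injected into onChat.

```javascript
// Sketch: stream a chat completion from the Ollama proxy.
// Each line of the response body is one JSON chunk; the final chunk has done: true.
async function streamChat(model, messages, onStream) {
  const response = await fetch('/api/ollama/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages, stream: true })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Each complete line is one JSON object.
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial line for the next read
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.message?.content) onStream(chunk.message.content);
    }
  }
}
```

Buffering the partial trailing line matters: network chunks do not align with JSON boundaries, so a chunk may end mid-object.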

Session Management

Per-Session Isolation:

// Each session has isolated conversation history
Session A: ["Hello", "Hi there!"]
Session B: ["What is JS?", "JavaScript is..."]

// Switching sessions loads fresh context
Click Session B →
  Load from DB →
  Clear memory →
  Set new context →
  Ready for chat
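
That switching flow could be written roughly like this, against the Conversation History API listed above. The state variable names (currentSessionId, conversationHistory) are illustrative, not the app's actual identifiers.

```javascript
// Sketch: switch the active session by reloading its history from SQLite.
let currentSessionId = null;
let conversationHistory = [];

async function switchSession(sessionId) {
  // Endpoint matches the Conversation History API above
  const response = await fetch(`/api/conversations?session_id=${encodeURIComponent(sessionId)}`);
  const data = await response.json();

  currentSessionId = sessionId;        // point context at the new session
  conversationHistory = data.history;  // clear memory, set fresh context
  return conversationHistory;
}
```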

Context Optimization (for fast responses):

// Before sending to Ollama:
conversationHistory = loadFromDB(); // e.g., 50 messages

// Build optimized context:
context = buildContext();
  → Summarize old messages (if > 15)
  → Keep recent 10 messages
  → Limit to 4000 tokens max
  → Result: ~12 messages sent to Ollama

// Ollama receives compact, relevant context
// → Faster response times
// → Lower memory usage
// → Better performance
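
The summarization step could look roughly like this. It is a simplified sketch: the thresholds mirror the numbers above, but the function names and prompt wording are assumptions; /api/generate returning a { response } object is Ollama's documented format.

```javascript
// Decide whether compaction is needed: more than 15 stored messages,
// or estimated tokens above 75% of the 4000-token budget.
function needsCompaction(history, estimateTokens, maxTokens = 4000) {
  const total = history.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  return history.length > 15 || total > maxTokens * 0.75;
}

// Ask the model itself to compact older messages into a short summary.
async function summarizeOldMessages(model, oldMessages) {
  const transcript = oldMessages
    .map(m => `${m.role}: ${m.content}`)
    .join('\n');

  const response = await fetch('/api/ollama/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model,
      prompt: `Summarize this conversation in a few sentences:\n${transcript}`,
      stream: false
    })
  });
  const data = await response.json();
  return data.response; // Ollama's /api/generate returns { response: "..." }
}
```

The returned summary then becomes the system message in the next context build, replacing the messages it condensed.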

Key el.js Usage Patterns

// Create elements with chaining
const container = el('div')
    .id('chat-app')
    .css({ 
        'max-width': chatType === 'full' ? config.full.width : config.popup.width,
        'height': chatType === 'full' ? config.full.height : config.popup.height
    });

// Add children properly
container.child([header, messagesContainer, inputArea]);

// Call get() to append to DOM
container.get();

// For dynamic content, manipulate DOM directly
messagesContainer.el.appendChild(messageElement.get());

// Event delegation for dynamic lists
chatList.el.addEventListener('click', function(e) {
  const item = e.target.closest('[data-session-id]');
  if (item) {
    // Handle session click
  }
});
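
Putting those patterns together, a message bubble might be assembled like this. The sketch uses only the el.js calls demonstrated above (el, css, text, child, get); the colors and sizing are illustrative, not the app's exact styles.

```javascript
// Sketch: build a WhatsApp-style message bubble with chained el.js calls.
// Assumes el.js is loaded on the page.
function createMessageBubble(text, isUser) {
  const bubble = el('div')
    .css({
      'background': isUser ? '#DCF8C6' : '#ffffff',  // user vs. bot bubble color
      'border-radius': '8px',
      'padding': '8px 12px',
      'max-width': '70%'
    });

  // Message text plus a timestamp, as in the features list above
  bubble.child([
    el('p').text(text),
    el('span').text(new Date().toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' }))
  ]);

  return bubble;
}

// Append using the direct-DOM pattern from the snippet above:
// messagesContainer.el.appendChild(createMessageBubble('Hello!', true).get());
```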

⚑ Performance Tips

Optimizing Ollama Response Speed

Context Window Size (biggest impact):

// Smaller context = Faster response
num_ctx: 2048   → Very fast ⚡⚡
num_ctx: 4096   → Balanced ⚡
num_ctx: 8192   → Slower but more context 🐌

// Our implementation uses dynamic sizing:
const optimalContextSize = Math.max(4096, Math.min(totalTokens + 1024, 8192));

Model Selection:

llama3.2:1b   → Fastest ⚡⚡⚡ (but less capable)
llama3.2:3b   → Fast ⚡⚡ (good balance)
llama3.1:8b   → Medium ⚡ (more accurate)
llama3.1:70b  → Slowest 🐢 (most capable)

Temperature (no speed impact, only creativity):

temperature: 0.3  → Focused, deterministic
temperature: 0.7  → Balanced (our default)
temperature: 1.0  → Creative, varied

Token Count (direct correlation to speed):

// Fewer tokens = Faster
Short prompt (100 tokens)  → Fast ⚡
Long essay (2000 tokens)   → Slow 🐌

// Our optimizations:
- Auto-summarization reduces old messages
- Recent window limits to 10 messages
- Token-aware building stops at limit

🎨 Color Themes

Pre-configured themes available in comments:

  • 🟦 Blue Theme: Cyan/blue gradient scheme
  • 🟩 Green Theme: Mint/green fresh look
  • 🟧 Orange Theme: Warm pink/orange combination

🛠️ Troubleshooting

Ollama Connection Issues

Error: "Cannot connect to Ollama"

# Check if Ollama is running
ollama serve

# Verify models are installed
ollama list

# Test direct connection
curl http://localhost:11434/api/tags

Model Not Found

Error: "model llama3.2 not found"

# Pull the model
ollama pull llama3.2:latest

Database Issues

Error: "Failed to save message"

# Ensure data directory exists
mkdir -p ./data

# Check file permissions
ls -la ./data/

Port Conflicts

Error: "Port 3000 already in use"

# Find process using port 3000
lsof -i :3000

# Kill the process
kill -9 <PID>

# Or change port in index.js
const PORT = 3001;

☕ Support

If you find this project helpful, consider buying me a coffee!

Support on Ko-fi

📄 License

MIT License

👨‍💻 Author

Created with ❤️ using el.js
