
hackjutsu/mini-chatbot


Mini Chatbot logo

✏️ From Vibing to Engineering: A Case Study of Building a Character Chatbot

Mini is a character chatbot powered by your favorite local LLMs. It uses a React frontend, an Express.js backend, and an Ollama-based LLM facade. As its name suggests, it supports a minimal feature set, designed to be extended for more complex use cases later on:

  • character marketplace
  • conversation session
  • message sharing
  • user management

Create Character Manage Characters
Pick a character to start chatting Share Message

Landing Page

A static marketing page that highlights the Mini experience lives under docs/index.html so it can be published directly to GitHub Pages. It reuses the screenshots above plus client/public branding assets to describe personas, chat sessions, and the local-first stack.

Prerequisites

  • Node.js 18+
  • Ollama, with ollama serve running locally (or brew services start ollama to keep it running in the background)
  • LLM models pulled locally, e.g. ollama pull qwen2.5

Getting Started

  1. Install dependencies:

    npm install
    npm --prefix client install
  2. Build the React frontend so the Express server can serve it:

    npm --prefix client run build
  3. Ensure the Ollama daemon is running and the desired model is available:

    ollama serve
    # or to run in the background
    brew services start ollama
  4. Start the web server:

    npm start # to run the service
    
    LOG_LEVEL=debug npm run dev # for local development
  5. Visit http://localhost:3000. On first load you’ll be prompted to choose a username (a handle is stored locally). From there you can create/manage sessions and start chatting.
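The steps above assume the "Node.js 18+" prerequisite is met. As a convenience, a preflight check along these lines could be dropped into a setup script (the file and helper names here are illustrative, not part of the repo):

```javascript
// preflight.js (illustrative) – verify the running Node.js satisfies the
// "Node.js 18+" prerequisite before installing dependencies or starting.
function meetsNodeRequirement(version, minMajor = 18) {
  // process.version looks like "v18.19.0"; compare the major component.
  const major = Number(version.replace(/^v/, "").split(".")[0]);
  return Number.isInteger(major) && major >= minMajor;
}

if (!meetsNodeRequirement(process.version)) {
  console.error(`Node.js 18+ required, found ${process.version}`);
  process.exit(1);
}

module.exports = { meetsNodeRequirement };
```

Running `node preflight.js` before `npm install` fails fast on an unsupported runtime instead of surfacing cryptic errors later.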

Local development tips

  • Use npm --prefix client run dev for a hot-reloading React dev server (it proxies API calls to localhost:3000). Run npm start separately so the backend is available.
  • Running npm --prefix client run build again will refresh the production bundle that Express serves.
  • npm run dev rebuilds the client and restarts the server in one command; prefix it with LOG_LEVEL=debug for verbose logging.
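The dev-server proxying mentioned above is configured in the client's build tool. A minimal sketch, assuming the client uses Vite (this README doesn't name the tool, so treat the file name and shape as illustrative):

```javascript
// vite.config.js (illustrative) – forward /api/* calls from the
// hot-reloading dev server to the Express backend on localhost:3000.
const config = {
  server: {
    proxy: {
      "/api": {
        target: "http://localhost:3000", // Express server from `npm start`
        changeOrigin: true,
      },
    },
  },
};

module.exports = config;
```

With a setup like this, the React dev server handles assets and hot reload while every API call still reaches the real backend.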

Linting & Testing

Backend

# to lint the backend code
npm run lint
# to lint both backend and frontend
npm run lint:all

# to run tests
npm test

Frontend

# to lint the frontend code
npm run lint:client
# to lint both backend and frontend
npm run lint:all

# to run tests
npm --prefix client run test
# or
npm run test:client

Environment Variables

Name             Default                          Description
PORT             3000                             Port for the Express server.
OLLAMA_CHAT_URL  http://localhost:11434/api/chat  Endpoint for the local Ollama chat API.
OLLAMA_MODEL     qwen2.5                          Model identifier passed to Ollama.
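A sketch of how these variables might be resolved on the server side, falling back to the documented defaults (the variable names match the table; the surrounding module is illustrative, not the repo's actual config code):

```javascript
// config.js (illustrative) – read settings from the environment,
// falling back to the defaults documented above.
function loadConfig(env = process.env) {
  return {
    port: Number(env.PORT ?? 3000),
    ollamaChatUrl: env.OLLAMA_CHAT_URL ?? "http://localhost:11434/api/chat",
    ollamaModel: env.OLLAMA_MODEL ?? "qwen2.5",
  };
}

module.exports = { loadConfig };
```

For example, `PORT=8080 OLLAMA_MODEL=llama3 npm start` would override the port and model while leaving the chat URL at its default.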

Features & Flow

  • Persistent chat history – Every message is stored centrally (SQLite + better-sqlite3) under a user/session, so conversations survive refreshes and can resume on any device that knows the user handle.
  • Multiple sessions per user – Users can spin up as many chats as they want, rename them, and delete them; each session history is loaded on demand via REST endpoints.
  • Character marketplace – Each user can create and publish characters (name + background + avatar). Sessions can be associated with a specific persona (or none), and the persona prompt is injected automatically so the assistant replies in-character.
  • Streaming proxy to Ollama – the /api/chat endpoint rebuilds the prompt from stored history, streams NDJSON deltas from Ollama to the browser, and aborts upstream work if the client disconnects.
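Ollama's streaming chat API emits newline-delimited JSON, one object per delta, so a proxy like this has to buffer partial lines between network chunks. A minimal sketch of that buffering step (the function name is illustrative, not the repo's actual code):

```javascript
// Accumulate raw chunks and extract assistant-text deltas from complete
// NDJSON lines; a trailing partial line is carried over to the next chunk.
function extractDeltas(buffer, chunk) {
  const lines = (buffer + chunk).split("\n");
  const rest = lines.pop(); // possibly incomplete last line
  const deltas = [];
  for (const line of lines) {
    if (!line.trim()) continue;
    const obj = JSON.parse(line);
    // Ollama /api/chat stream objects carry text under message.content.
    if (obj.message && typeof obj.message.content === "string") {
      deltas.push(obj.message.content);
    }
  }
  return { deltas, rest };
}

module.exports = { extractDeltas };
```

Each extracted delta can be forwarded to the browser immediately, which is what makes responses appear token by token instead of all at once.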

Feel free to extend this further (e.g., auth, RAG, summarization, rate limiting).

License

MIT
