
LLM X


Privacy statement:

LLM X does not make any external API calls (go ahead, check your network tab and see the Fetch section). Your chats and image generations are 100% private. This site/app works completely offline.

Bugs!

Ollama + Firefox: LLM X uses ollama-js to update models and show model information. There is currently a CORS issue on Firefox when using the app from GitHub Pages that prevents updating models and viewing model information. Until this is fixed, it is advised to use Chrome for these features or to use the CLI. Apologies. GitHub issue

Recent additions:

  • Text generation through LM Studio is here! (a minimal call sketch follows this list)
  • Regenerating a bot message adds it to a message variation list
  • Message headers and footers stay sticky with the message, which is useful for long messages
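
LM Studio serves an OpenAI-compatible API on your machine, so text generation is a plain chat-completions call. Here is a minimal TypeScript sketch (illustrative only, not LLM X's actual code); port 1234 is LM Studio's default, and the model name is a placeholder for whichever model you have loaded.

```ts
// Hedged sketch: ask LM Studio's local server for a chat completion.
// Assumes `lms server start --cors=true` is running on the default port 1234
// and that a model is loaded; 'loaded-model-name' is a placeholder.
async function askLmStudio(prompt: string): Promise<string> {
  const response = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'loaded-model-name', // placeholder
      messages: [{ role: 'user', content: prompt }],
    }),
  })

  const data = await response.json()
  // The server is OpenAI-compatible, so the reply is in choices[0].message.content
  return data.choices[0].message.content
}

askLmStudio('Say hello!').then(console.log)
```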

How To Use:

Prerequisites for application

  • Ollama: Download and install Ollama
    • Pull down a model (or a few) from the library, e.g. ollama pull llava (or use the app)
  • LM Studio: Download and install LM Studio
  • AUTOMATIC1111: Git clone AUTOMATIC1111 (for image generation)

How to use the web client (no install):

Prerequisites for web client

  • Ollama Options (a connectivity check sketch follows this list):
    • Use Ollama's FAQ to set OLLAMA_ORIGINS = *.github.io
    • Run this in your terminal: OLLAMA_ORIGINS=*.github.io ollama serve
      • (PowerShell users: $env:OLLAMA_ORIGINS="https://%2A.github.io/"; ollama serve)
  • LM Studio:
    • Run this in your terminal: lms server start --cors=true
  • A1111:
    • Run this in the a1111 project folder: ./webui.sh --api --listen --cors-allow-origins "*"
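
If you want to confirm the origins setup worked, here is a small sketch (not part of LLM X itself) you can paste into your browser's devtools console while on the LLM-X page: it calls Ollama's /api/tags endpoint on the default port and either lists your pulled models or fails with a CORS error.

```ts
// Hedged sketch: verify the locally running Ollama server is reachable from the browser.
// 11434 is Ollama's default port; /api/tags lists the models you have pulled.
fetch('http://localhost:11434/api/tags')
  .then((res) => res.json())
  .then((data) => console.log('Ollama models:', data.models.map((m: { name: string }) => m.name)))
  .catch((err) => console.error('Ollama is unreachable or OLLAMA_ORIGINS is not set:', err))
```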

  • Use your browser to go to LLM-X
  • Go offline! (optional)
  • Start chatting!

How to use offline:

  • Follow the instructions for "How to use the web client"
  • In your browser's address bar, there should be a download/install button; press it.
  • Go offline! (optional)
  • Start chatting!

How to use from project:

Prerequisites for local project

  • Ollama: Run this in your terminal: ollama serve
  • LM Studio: Run this in your terminal: lms server start
  • A1111: Run this in the a1111 project folder: ./webui.sh --api --listen

  • Pull down this project, then run yarn install and yarn dev
  • Go offline! (optional)
  • Start chatting!

Goals / Features

  • LM Studio integration!
  • Text to Image generation through AUTOMATIC1111 (see the sketch after this list)
  • Image to Text using Ollama's multi-modal abilities (see the sketch after this list)
  • Offline Support via PWA technology
  • Code highlighting with Highlight.js (only handles common languages for now)
  • Search/Command bar provides quick access to app features through kbar
  • Text Entry and Response to Ollama
  • Auto saved Chat history
  • Manage multiple chats
  • Copy/Edit/Delete messages sent or received
  • Re-write user message (triggering response refresh)
  • System Prompt customization through "Personas" feature
  • Theme changing through DaisyUI
  • Chat image modal previews through Yet Another React Lightbox
  • Import / Export chat(s)
  • Continuous Deployment! Merging to the master branch triggers a new GitHub Pages build/deploy automatically
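
For the text-to-image feature, AUTOMATIC1111 exposes a REST API when started with --api. Below is a hedged TypeScript sketch of a txt2img call; it is illustrative rather than LLM X's actual implementation, and the port (7860) and generation parameters are A1111 defaults / placeholders.

```ts
// Hedged sketch: generate an image with AUTOMATIC1111's txt2img endpoint.
// Assumes ./webui.sh --api --listen --cors-allow-origins "*" is running on port 7860.
async function generateImage(prompt: string): Promise<string> {
  const response = await fetch('http://localhost:7860/sdapi/v1/txt2img', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, steps: 20, width: 512, height: 512 }),
  })

  const data = await response.json()
  // data.images is an array of base64-encoded PNGs; prefix one for use as an <img> src
  return `data:image/png;base64,${data.images[0]}`
}
```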
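
For image-to-text, the ollama-js browser client can send images alongside a prompt to a multi-modal model such as llava (ollama pull llava). The sketch below is an assumption-laden illustration, not LLM X's actual code; the host is Ollama's default and the model name is an example.

```ts
import { Ollama } from 'ollama/browser'

// Hedged sketch: describe an image with a multi-modal Ollama model via ollama-js.
const ollama = new Ollama({ host: 'http://localhost:11434' }) // default Ollama host

async function describeImage(imageBase64: string): Promise<string> {
  const response = await ollama.chat({
    model: 'llava', // example multi-modal model
    messages: [
      {
        role: 'user',
        content: 'Describe this image.',
        images: [imageBase64], // base64-encoded image data
      },
    ],
  })

  return response.message.content
}
```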

Screenshots:

Conversation about logo
Logo convo screenshot
Image generation example!
Image generation screenshot
Showing off omnibar and code
Omnibar and code screenshot
Showing off code and light theme
Code and light theme screenshot
Responding about a cat
Cat screenshot
Another logo response
Logo 2 screenshot

What is this? A ChatGPT-style UI for the niche group of folks who run Ollama (think of it as an offline ChatGPT server) locally. Supports sending and receiving images and text! WORKS OFFLINE through PWA (Progressive Web App) standards (it's not dead!)

Why do this? I have been interested in LLM UIs for a while now, and this seemed like a good intro application. I've been introduced to a lot of modern technologies thanks to this project as well; it's been fun!

Why so many buzz words? I couldn't help but bee cool 😎

Tech Stack (thank you's):

Logic helpers:

UI Helpers:

Project setup helpers:

Inspiration: the ollama-ui project, which allows users to connect to Ollama via a web app.

Perplexity.ai: Perplexity has made some amazing UI advancements in the LLM UI space, and I have been very interested in getting to that point. Hopefully this starter project lets me get closer to doing something similar!

Getting started

(please note the minimum engine requirements in the package.json)

Clone the project, and run yarn in the root directory

yarn dev starts a local instance and opens up a browser tab under https:// (for PWA reasons)

MISC

  • LangChain.js was attempted while spiking on this app, but unfortunately it was not set up correctly for stopping incoming streams. I hope this gets fixed in the future, or, if possible, a custom LLM agent can be utilized in order to use LangChain.

    • Edit: LangChain is working and has been added to the app now! (a stream-cancellation sketch is included at the end of this readme)
  • Originally I used create-react-app 👴 while making this project without knowing it is no longer maintained; I am now using Vite. 🤞 This already allows me to use libraries like ollama-js that I could not use before. I will be testing more with LangChain very soon

  • This readme was written with https://stackedit.io/app

  • Changes to the main branch trigger an immediate deploy to https://mrdjohnson.github.io/llm-x/
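
As referenced in the MISC notes above, here is a hedged sketch of one way stream stopping can look with LangChain.js. The @langchain/ollama package, model name, and shouldStop helper are assumptions for illustration; this is not the app's actual implementation.

```ts
import { ChatOllama } from '@langchain/ollama'

// Hedged sketch: stream tokens from a local Ollama model through LangChain.js and
// stop consuming them when the user asks to stop (e.g. a "stop generating" button).
const model = new ChatOllama({ baseUrl: 'http://localhost:11434', model: 'llama3' })

async function streamUntilStopped(prompt: string, shouldStop: () => boolean): Promise<string> {
  let text = ''
  for await (const chunk of await model.stream(prompt)) {
    if (typeof chunk.content === 'string') text += chunk.content
    if (shouldStop()) break // leaving the loop ends consumption of the stream early
  }
  return text
}
```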