A chat interface for OpenAI and Ollama models featuring chat streaming, local caching, and customisable model values.
OpenAI models use your OpenAI developer key, which allows you to pay per token.
Check out the demo here
- Code highlighting on input and response
- LLaVA model support (vision models)
- Easy to share a model with just a link
- Completely local. All your conversations are stored in your browser, not on some server
- Custom model settings
- PWA for lightweight installation on mobile and desktop
Ollama requires you to allow outside connections by setting the OLLAMA_ORIGINS environment variable. I've been testing with `*`, but setting it to `ai.chat.mc.hzuccon.com` or `harvmaster.github.io` should also work, depending on where you're accessing it from (or, if you're self-hosting, your own domain). For more information see here
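As a minimal sketch, you could start Ollama with the variable set inline (swap in whichever origin you're actually serving the app from, or `*` to allow everything):

```bash
# Allow the hosted chat UI to call your local Ollama server.
# Replace the origin with the domain you access the app from.
OLLAMA_ORIGINS="https://harvmaster.github.io" ollama serve
```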
- Fix multiple root elements in template (src/components/ChatMessage/ChatmessageChunk.vue)
- Explore continue prompt (src/components/ChatMessage/ChatMessage.vue)
yarn
# or
npm install
The service can be launched in dev mode and is accessible at http://localhost:9200/#/
quasar dev
In dev mode, the HMR_PORT environment variable can be set to allow Hot Module Reloading (HMR) to work when the service is sitting behind a subdomain or domain.
environment:
- HMR_PORT=443
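If you're not running the service through docker-compose, exporting the variable in your shell should have the same effect (this assumes the dev config reads HMR_PORT from the environment, as in the compose snippet above):

```bash
# Assumption: the Quasar dev config picks up HMR_PORT from the environment
HMR_PORT=443 quasar dev
```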
yarn lint
# or
npm run lint
yarn format
# or
npm run format
quasar build