Use locally running LLMs directly from Siri 🦙🟣

Siri LLama

Siri LLama is an Apple Shortcut that accesses locally running LLMs through Siri or the Shortcuts UI on any Apple device connected to the same network as your host machine. It uses LangChain and Ollama.

Demo Video🎬

Installation

  1. Install Ollama on your machine, then run ollama serve in the terminal to start the server.

  2. Install LangChain and Flask, then run the Flask app:

pip install --upgrade --quiet langchain langchain-openai
pip install flask

  3. Download the shortcut from here.

  4. Enter your local IP address and the Flask port in the shortcut.

  5. Run the shortcut through Siri or the Shortcuts UI.
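The Flask app that bridges the shortcut to Ollama can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: the route path, JSON field names, port, and the "llama3" model name are all assumptions.

```python
# Minimal sketch of a Flask bridge between the shortcut and Ollama.
# The endpoint name, payload shape, and model are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def get_llm():
    # Imported lazily so the server can start even if Ollama isn't up yet.
    # ChatOllama (langchain_community) and the "llama3" model are assumptions.
    from langchain_community.chat_models import ChatOllama
    return ChatOllama(model="llama3")

@app.route("/", methods=["POST"])
def chat():
    prompt = request.get_json(force=True).get("prompt", "")
    reply = get_llm().invoke(prompt)  # blocking call to the local model
    return jsonify({"response": reply.content})

if __name__ == "__main__":
    # Bind to 0.0.0.0 so other devices on the same network can reach it.
    app.run(host="0.0.0.0", port=5000)
```

The shortcut then just POSTs the spoken prompt to `http://<your-local-ip>:5000/` and reads the response field back to Siri.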

Note

SiriLLama is the simplest possible LangChain chatbot; there is huge room for improvement and customization, including model selection (even through the OpenAI or Anthropic APIs), RAG applications, better LLM memory options, etc.

Common Issues

  • Even though we access the Flask app (not the Ollama server directly), some Windows users who have Ollama installed via WSL have to make sure the Ollama server is exposed to the network. Check this issue for more details.
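One common fix, sketched here under the assumption that the WSL-side Ollama binds to 127.0.0.1 by default, is to start it with Ollama's `OLLAMA_HOST` environment variable so it listens on all interfaces:

```shell
# Bind the Ollama server to all interfaces so the Flask app
# (and other machines on the LAN) can reach it from outside WSL.
OLLAMA_HOST=0.0.0.0 ollama serve
```

Depending on your setup you may also need a Windows firewall rule or WSL port forwarding; see the linked issue for specifics.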
