
CaptJaybles/SynologyLLM


SynologyLLM V2

A local AI server for use with Synology AI Console and Synology Chat

- Future goal: package this as a PC app or a Docker app, but help is needed before that will happen

Only tested on Windows 11 with Python 3.11; builds on llama-cpp-python

Install

  1. Install Visual Studio Community 2022 (I checked the Python development and C++ development workloads)

  2. Clone the repository

  3. Create a virtual environment in the folder

python -m venv venv
  4. Activate the virtual environment
venv/Scripts/activate
  5. Install the requirements
pip install -r requirements.txt

- The CUDA build is installed directly through requirements.txt; change it to your preferred backend as needed

Setup

  1. Place your LLM in the model folder and copy its file name into the settings file (MODEL_FILENAME="model name.gguf")

  2. Set up a new bot in your Synology Chat app

  3. Copy the token and the incoming URL into the settings file (SYNOCHAT_TOKEN="Token" and SYNOCHAT_WEBHOOK_URL="incoming url")

- Now is a good time to change the settings file defaults from the values I used for testing on my low-end laptop
  4. The outgoing URL in the Synology Chat integration will be http://HOST_IP:HOST_PORT/SynologyLLM; change HOST_IP and HOST_PORT to the values for the local PC you are running the model on

  5. Run either the synologyLLM.bat file or the command below

python synologyLLM.py
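Putting the setup steps together, a settings file might look like the following sketch. The variable names come from the steps above; the address values are placeholders you must replace with your own:

```python
# settings.py - example values only; replace with your own.

# Must match the .gguf file placed in the model folder.
MODEL_FILENAME = "model name.gguf"

# Token and incoming webhook URL from the bot created in Synology Chat.
SYNOCHAT_TOKEN = "Token"
SYNOCHAT_WEBHOOK_URL = "incoming url"

# Address the server listens on; the outgoing URL in the Synology Chat
# integration then becomes http://HOST_IP:HOST_PORT/SynologyLLM
HOST_IP = "192.168.1.50"   # example LAN address of the PC running the model
HOST_PORT = 5000           # example port
```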

Features

  1. Loads any llama.cpp model that is supported

  2. To see the list of commands
/commands
  3. To reset your conversation and stored settings
/reset
  4. To change the system prompt
/system {new system prompt message}
  5. To change how many chat turns are stored in the system prompt
/chat_turns {number of turns}
  6. To toggle thinking when using a thinking model
/think {true|false}
  7. Uses a chat message queue system

  8. Added tool usage and some sample tools in the tools.py file

- Included tools: wiki_tool, ddg_tool, weather_tool, time_tool, news_tool
  9. Added multiple-user capability; each individual user's settings are tracked persistently
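As an illustration of what one of the sample tools might look like, here is a minimal time tool. The function name matches the time_tool listed above, but its exact signature and return format here are assumptions for illustration; the real interface lives in tools.py:

```python
from datetime import datetime, timezone

def time_tool() -> str:
    """Hypothetical sketch of a time tool: returns the current UTC
    date and time as plain text the model can use in its reply."""
    now = datetime.now(timezone.utc)
    return now.strftime("Current UTC time: %Y-%m-%d %H:%M:%S")
```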

To use with Synology AI Console

  1. Select OpenAI as the provider

  2. Enter the API key set as HOST_API_SECRET in the settings file

- The default is sk-xxx
  3. Under advanced settings, use this address
- http://HOST_IP:HOST_PORT
- Change the values to those you are using
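Since the server is registered as an OpenAI provider, Synology AI Console will send it OpenAI-style chat-completion requests. A sketch of such a request built with the standard library (the /v1/chat/completions path and the payload shape follow the OpenAI API convention and are assumptions, not confirmed by this README; the host values are placeholders):

```python
import json
from urllib.request import Request

HOST_IP = "192.168.1.50"   # placeholder: your host IP
HOST_PORT = 5000           # placeholder: your host port
API_KEY = "sk-xxx"         # default HOST_API_SECRET from the settings file

payload = {
    "model": "local",  # placeholder; the served model is whatever GGUF is loaded
    "messages": [
        {"role": "user", "content": "Hello from Synology AI Console"},
    ],
}

req = Request(
    f"http://{HOST_IP}:{HOST_PORT}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here since it
# needs the server to be running.
```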
