zik-gpt4all

A simple server for streaming GPT4ALL model outputs using server-sent events (SSE).

GPT4All-J

If you want to use the GPT4All-J model, follow these steps:

  1. Download the raw GPT4All-J model from the GPT4ALL repository.
  2. Clone and build the ggml repository using the following commands:
git clone https://github.com/masasron/ggml
cd ggml
mkdir build && cd build
cmake ..
make -j4 gpt-j
  3. Start the zik-gpt4all server using the following commands:
git clone https://github.com/masasron/zik-gpt4all.git
cd zik-gpt4all
npm install
MODEL_PATH={YOUR_MODEL_PATH} MODEL_EXE_PATH={../ggml/build/bin/gpt-j} node src/ggml-server.js

The server should be running on port 3001. Update the server URL on the Zik settings page and give it a try.
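
If the server does not start, a quick thing to check is whether both environment variables point at existing files. The snippet below is a small, hypothetical helper (not part of this repository); save it as, say, check-paths.js and run it with the same MODEL_PATH and MODEL_EXE_PATH values you plan to pass to the server.

// check-paths.js -- optional pre-flight check (hypothetical helper, not part of this repo).
// Verifies that MODEL_PATH and MODEL_EXE_PATH point at existing files before launching the server.
const fs = require("fs");

for (const name of ["MODEL_PATH", "MODEL_EXE_PATH"]) {
  const value = process.env[name];
  if (!value || !fs.existsSync(value)) {
    console.error(`${name} is not set or does not point to an existing file: ${value}`);
    process.exit(1);
  }
}
console.log("Model and executable paths look good.");

Run it the same way you would start the server, e.g. MODEL_PATH={YOUR_MODEL_PATH} MODEL_EXE_PATH={../ggml/build/bin/gpt-j} node check-paths.js.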

GPT4ALL (LLaMa)

If you want to use the LLaMa-based GPT4ALL model, make sure it is working on your local machine before running the server by following the instructions in the GPT4ALL repository.

Once the LLaMa-based GPT4ALL model is working locally, start the zik-gpt4all server using the following commands, with MODEL_EXE_PATH pointing at that GPT4ALL executable:

git clone https://github.com/masasron/zik-gpt4all.git
cd zik-gpt4all
npm install
MODEL_PATH={YOUR_MODEL_PATH} MODEL_EXE_PATH={YOUR_GPT4ALL_EXE_PATH} node src/server.js

The server should be running on port 3001. Update the server URL on the Zik settings page and give it a try.
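
Once either variant of the server is running, you can watch the stream directly from a small Node script. The sketch below is only an illustration: the endpoint path ("/chat") and the "prompt" query parameter are assumptions, so check src/server.js or src/ggml-server.js for the actual route and request format.

// sse-client.mjs -- minimal SSE consumer sketch (Node 18+, built-in fetch).
// NOTE: the "/chat" path and "prompt" query parameter are assumptions; the real
// route is defined in src/server.js / src/ggml-server.js.
async function main() {
  const url = "http://localhost:3001/chat?prompt=" + encodeURIComponent("Hello!");
  const res = await fetch(url);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // Server-sent events arrive as "data: <payload>" lines separated by blank lines.
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      if (line.startsWith("data:")) process.stdout.write(line.slice(5).trimStart());
    }
  }
}

main().catch(console.error);

Run it with node sse-client.mjs while the server is listening on port 3001.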
