

# Chatty Llama

A fullstack chat app utilizing Llama LLMs


## How to run

### 1. Install huggingface-cli

```shell
$ make install-huggingface-cli
```

### 2. Export huggingface token

Create a Hugging Face token at https://huggingface.co/settings/tokens, then set it as an environment variable on your machine:

```shell
$ export HF_TOKEN=<your-token-here>
```

### 3. Download the Llama-2-7B-Chat-GGML model

```shell
$ make download-model
```
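If you prefer not to go through the Makefile, the same step can be done directly with `huggingface-cli`. A rough manual equivalent, assuming the quantized weights live in TheBloke's Llama-2-7B-Chat-GGML repository on the Hub (the exact filename and destination directory used by the make target may differ):

```shell
# Hypothetical manual download; the model repo, filename, and target
# directory here are assumptions, not taken from this project's Makefile
$ huggingface-cli download TheBloke/Llama-2-7B-Chat-GGML \
    llama-2-7b-chat.ggmlv3.q4_0.bin --local-dir ./models
```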

### 4. Run the chat app

```shell
$ make chatty-llama
```

PS! If you're having issues connecting to the backend, try running `make chatty-llama-host` instead.

In your browser, open http://localhost:80
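If the page doesn't load, a quick command-line probe can tell you whether the frontend is being served at all (a simple diagnostic, not part of the project's tooling; a `200` status code means the server is up):

```shell
# Print only the HTTP status code returned by the local server
$ curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:80
```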

Enjoy!