llama explain

About

llama explain is a Chrome extension that explains complex text on the web in simple terms, using a locally running LLM. Everything happens on your machine: no data is sent to OpenAI's, or any other company's, servers, and there is no service to pay for.

It's built on top of Ollama and, as such, currently runs only on macOS.

(Demo screenshots: explain, pick_model)

Prerequisites

When using the installation script:

  * Homebrew (the script uses it to install Ollama)

When installing manually:

  * Ollama

Make sure that Ollama is running, otherwise the extension will not work. If the extension does not show any models in the model selection view, run the following command:

ollama serve

Setup

You can install the extension directly from the store (coming soon) or load the unpacked extension (the extension directory) in your browser.

The rest of the setup can be done either by using the setup.sh script or manually.

Using the script (recommended)

The script installs Ollama for you using Homebrew, creates the required models based on Llama2, and exports the environment variable that the extension needs to communicate with Ollama.

To run the script:

  1. Navigate to the setup directory
  2. Make the script executable by running
    chmod +x setup.sh
  3. Run the script:
    source setup.sh
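
Note that the script is sourced rather than executed, so that the exported environment variable persists in your shell. Conceptually, it amounts to something like the following (a minimal sketch, assuming the Modelfile paths from the manual section below; the actual setup.sh may differ):

# Install Ollama via Homebrew
brew install ollama
# Allow the Chrome extension to talk to the local Ollama server
export OLLAMA_ORIGINS="chrome-extension://*"
# Build the explain models from the provided Modelfiles
ollama create llama-explain-llama2:13b -f ../modelfile/llama-explain-llama2-13b-modelfile
ollama create llama-explain-llama2:7b -f ../modelfile/llama-explain-llama2-7b-modelfile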

If you run into an "operation not permitted" error while executing the steps above, run

xattr -d com.apple.quarantine setup.sh

from inside the setup directory.

Manual

If you prefer not to use the setup script, you can perform all the steps manually:

  1. Install Ollama

  2. Set the OLLAMA_ORIGINS environment variable to "chrome-extension://*". This is needed to allow the extension to communicate with Ollama (see the example after this list)

  3. Create the Ollama models based on the provided Modelfiles. Run:

    ollama create llama-explain-llama2:13b -f ../modelfile/llama-explain-llama2-13b-modelfile
    ollama create llama-explain-llama2:7b -f ../modelfile/llama-explain-llama2-7b-modelfile

    from inside the modelfile directory.

    Note that the extension requires only a single model. By default, two Modelfiles are provided: one based on Llama2 7b, and the other based on Llama2 13b. You can pick just one of the two, or create yet another Ollama model for the task. If you opt for a custom model, make sure that its name has the llama-explain prefix (see the sketch below).
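
For step 2, how you set the variable depends on how you start Ollama. A sketch of two common options (the launchctl route applies when you use the Ollama macOS app; adapt as needed):

# Option A: export in the shell session where you start the server
export OLLAMA_ORIGINS="chrome-extension://*"
ollama serve

# Option B: set it for the Ollama macOS app, then restart the app
launchctl setenv OLLAMA_ORIGINS "chrome-extension://*"

If you go the custom-model route, a Modelfile is a plain text file with a few directives. A minimal, hypothetical sketch (the file name, base model, and system prompt here are illustrative, not the contents of the shipped Modelfiles):

# llama-explain-custom-modelfile (hypothetical)
FROM llama2:7b
SYSTEM "Explain the text provided by the user in simple terms."

You would then create and register it under the required prefix with:

ollama create llama-explain-custom -f llama-explain-custom-modelfile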