
Ollama


Get up and running with large language models locally.

macOS

Download

Linux & WSL2

curl https://ollama.ai/install.sh | sh

Manual install instructions

Windows

coming soon

Quickstart

To run and chat with Llama 2:

ollama run llama2

Model library

Ollama supports a list of open-source models available at ollama.ai/library.

Here are some example open-source models that can be downloaded:

Model               Parameters  Size   Download
Mistral             7B          4.1GB  ollama run mistral
Llama 2             7B          3.8GB  ollama run llama2
Code Llama          7B          3.8GB  ollama run codellama
Llama 2 Uncensored  7B          3.8GB  ollama run llama2-uncensored
Llama 2 13B         13B         7.3GB  ollama run llama2:13b
Llama 2 70B         70B         39GB   ollama run llama2:70b
Orca Mini           3B          1.9GB  ollama run orca-mini
Vicuna              7B          3.8GB  ollama run vicuna

Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

Customize your own model

Import from GGUF or GGML

Ollama supports importing GGUF and GGML file formats in the Modelfile. This means if you have a model that is not in the Ollama library, you can create it, iterate on it, and upload it to the Ollama library to share with others when you are ready.

  1. Create a file named Modelfile, and add a FROM instruction with the local filepath to the model you want to import.

    FROM ./vicuna-33b.Q4_0.gguf
    
  2. Create the model in Ollama:

    ollama create <name> -f <path_to_modelfile>

  3. Run the model:

    ollama run <name>
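
Putting the steps together, a minimal end-to-end sketch (the model name vicuna-local is an arbitrary example, paired with the FROM line above):

ollama create vicuna-local -f ./Modelfile
ollama run vicuna-local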
    

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama2 model, first pull it:

ollama pull llama2

Create a Modelfile:

FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.

CLI Reference

Create a model

ollama create is used to create a model from a Modelfile.
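
For example, assuming a Modelfile in the current directory (the model name mymodel is an arbitrary example):

ollama create mymodel -f ./Modelfile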

Pull a model

ollama pull llama2

This command can also be used to update a local model. Only the diff will be pulled.
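
A specific tag from the model library can be pulled the same way, for example:

ollama pull llama2:13b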

Remove a model

ollama rm llama2

Copy a model

ollama cp llama2 my-llama2

Multiline input

For multiline input, you can wrap text with """:

>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.

Pass in prompt as arguments

$ ollama run llama2 "summarize this file:" "$(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

List models on your computer

ollama list
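
Typical output shows each local model's name, ID, size, and modification time. An illustrative sketch (values are made up; exact columns may vary by version):

NAME             ID              SIZE    MODIFIED
llama2:latest    78e26419b446    3.8 GB  2 days ago
mario:latest     291fc8ab5f3e    3.8 GB  5 minutes ago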

Start Ollama

ollama serve is used when you want to start ollama without running the desktop application.
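
To start the server from a terminal:

ollama serve

By default the server listens on port 11434, which is the address the REST API example below targets.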

Building

Install cmake and go:

brew install cmake
brew install go

Then generate dependencies and build:

go generate ./...
go build .

Next, start the server:

./ollama serve

Finally, in a separate shell, run a model:

./ollama run llama2

REST API

See the API documentation for all endpoints.

Ollama has an API for running and managing models. For example, to generate text from a model:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
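
The endpoint streams its reply back as a series of JSON objects, ending with an object whose done field is true. An abbreviated, illustrative response (exact fields may vary by version):

{"model":"llama2","response":"The","done":false}
{"model":"llama2","response":" sky","done":false}
...
{"model":"llama2","response":"","done":true}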

Community Integrations
