# llm-llamafile


Access llamafile localhost models via LLM

## Installation

Install this plugin in the same environment as LLM.

```bash
llm install llm-llamafile
```

## Usage

Make sure you have a llamafile running on localhost, serving an OpenAI-compatible API endpoint on port 8080.

You can then use llm to interact with that model like so:

```bash
llm -m llamafile "3 neat characteristics of a pelican"
```
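Behind the scenes, the plugin forwards your prompt to llamafile's OpenAI-compatible chat completions endpoint. As a rough sketch of what an equivalent raw request looks like (the URL path, payload shape, and `ask` helper below follow the standard OpenAI chat completions format and are illustrative assumptions, not code from this plugin):

```python
import json
from urllib import request

# Assumed llamafile server address, using the default port 8080 mentioned above.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str) -> dict:
    # Standard OpenAI-style chat completion payload.
    # A single-model server like llamafile typically ignores the model name.
    return {
        "model": "llamafile",
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    # Send the payload and pull the reply text out of the first choice.
    payload = build_chat_request(prompt)
    req = request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `ask("3 neat characteristics of a pelican")` with a llamafile server running should return the same kind of completion the `llm` command above produces.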

## Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

```bash
cd llm-llamafile
python3 -m venv venv
source venv/bin/activate
```

Now install the dependencies and test dependencies:

```bash
llm install -e '.[test]'
```

To run the tests:

```bash
pytest
```
