llama.cpp, but wrapped in Python
This allows serving LLaMA through libraries such as FastAPI while using the optimized, and in particular quantized, models of the llama.cpp ecosystem instead of torch directly. This should decrease resource consumption compared to plain torch (a minimal serving sketch is shown at the end of the usage demo below).
The library has been pushed to PyPI, so you should be able to install it with
pip install llamacpypy
Please open an issue if something doesn't work.
Docs can be found on Read the Docs.
There is also a basic usage demo at the bottom of the readme.
At the moment this is all very raw, so it will require some work on the user's part. To build from source, clone the repository and initialize the submodules:
git clone https://github.com/seemanne/llamacpypy.git
cd llamacpypy
git submodule update --init
If you have Poetry, the pyproject file contains enough to let you run poetry install to set up the venv; however, this won't install the project itself. That can be done by running poetry shell and then calling pip install ./ as described below.
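Condensed, the Poetry flow looks like this (assuming Poetry itself is already installed; these are just the commands above in order):
poetry install   # creates the venv and installs the requirements
poetry shell     # activate the venv
pip install ./   # build and install the module itself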
If anyone wants to fix the build process to make it less cumbersome, I would be very happy.
If you have another setup, just pip install the requirements in your virtual environment of choice and then continue as described below.
Running make isn't actually required, but it will surface compile errors if something is wrong.
make -j
pip install ./
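To sanity-check the install, you can try importing the module (this assumes the steps above completed without errors):
python -c "from llamacpypy import Llama"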
Initialize the model instance:
from llamacpypy import Llama
llama = Llama('models/7B/ggml-model-q4_0.bin', warm_start=False)
Load your model into memory:
llama.load_model()
Generate from a given prompt:
var = llama.generate("This is the weather report, we are reporting a clown fiesta happening at backer street. The clowns ")
print(var)
>>> This is the weather report, we are reporting a clown fiesta happening at backer street. The clowns 1st of July parade was going to be in their own neighborhood but they just couldn't contain themselves;
They decided it would look better and probably have more fun if all went into one area which meant that the whole town had to shut down for a little while as all roads were blocked. At least traffic wasn’t too bad today because most of people are out shopping, but I did see some shoppers in their car driving away from Backer street with “clowns” on wheels outside their windows…
The kids lined up along the route and waited for the parade to pass by
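As mentioned at the top, one motivation is serving through FastAPI. Below is a minimal, untested sketch of what that could look like. It only relies on the Llama methods shown above; FastAPI, uvicorn, the server.py filename, the /generate route and the Prompt model are illustrative assumptions, not part of this library.
from fastapi import FastAPI
from pydantic import BaseModel
from llamacpypy import Llama

app = FastAPI()

# Load the quantized model once at startup (path as in the demo above).
llama = Llama('models/7B/ggml-model-q4_0.bin', warm_start=False)
llama.load_model()

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt):
    # llama.generate() is the same call used in the usage demo above.
    return {"completion": llama.generate(prompt.text)}
Run it with something like uvicorn server:app (assuming the file is saved as server.py).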
This Python module is mainly a wrapper around the llama class in src/inference.cpp. As such, any changes should be made there.
As the llama.cpp code is mostly contained in main.cpp, which doesn't expose a good API, this repo will have to be manually patched on an as-needed basis. Changes to ggml should not be a problem.