
llama-saas

A real-time client and server for LLaMA.

  • 🚀 Runs on any CPU machine, with no need for GPU 🚀
  • The server is written in Go.
  • The client is written in Python; it uses requests with response streaming, so output appears in real time (see the sketch below).
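
A rough sketch of that streaming pattern (the endpoint path, query parameter, and port here are assumptions for illustration; llama.py in this repo is the actual client):

    import requests

    # Stream a completion from the server chunk-by-chunk.
    # stream=True tells requests not to buffer the whole response body,
    # so text can be printed as soon as the server flushes it.
    def stream_completion(prompt, url="http://localhost:8080/ask"):
        with requests.get(url, params={"q": prompt}, stream=True) as resp:
            resp.raise_for_status()
            for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
                print(chunk, end="", flush=True)

    stream_completion('elaborate about "Github"')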

I personally used the smallest model (7B) on an Intel PC and a MacBook Pro; it is ~4.8G when quantized to 4-bit, or ~13G in full precision.
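
Those figures track simple back-of-envelope arithmetic (a sketch only; the on-disk files run larger than raw parameter bytes because quantized formats also store per-block scaling factors, vocabulary, and metadata):

    # Rough size estimates for a 7B-parameter model.
    params = 7e9
    fp16_gib = params * 2 / 2**30    # 2 bytes per parameter   -> ~13.0 GiB
    q4_gib = params * 0.5 / 2**30    # 0.5 bytes per parameter -> ~3.3 GiB raw;
                                     # format overhead pushes the file toward ~4.8G
    print(f"fp16: ~{fp16_gib:.1f} GiB, 4-bit: ~{q4_gib:.1f} GiB")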

Examples

  • Nice example: elaborate about "Github"

  • Biased example: elaborate about "Donald Trump"

Get LLaMA Pretrained Checkpoints

Note that LLaMA cannot be used commercially.

  • To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license focused on research use cases. Access to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world. People interested in applying for access can find the link to the application in our research paper.

Apply for Official Access. You will get a unique download link once you are approved.

How to use

Assuming you have the LLaMA checkpoints (☝️):

  1. Clone and build https://github.com/ggerganov/llama.cpp
  2. Edit the LLAMA_MODEL_PATH and LLAMA_MAIN variables in server.go (the sketch after this list shows what they control).
  3. Build and run the server:
     go build
     ./server
  4. Run the client:
     python3 -m pip install requests
     python3 llama.py
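
Step 2's two variables suggest the server works by invoking the llama.cpp main binary and relaying its output as it is produced. A minimal Python sketch of that pattern (the paths and flags below are assumptions for illustration; server.go holds the real invocation, in Go):

    import subprocess

    LLAMA_MAIN = "./llama.cpp/main"                       # binary built in step 1
    LLAMA_MODEL_PATH = "./models/7B/ggml-model-q4_0.bin"  # assumed model filename

    # Spawn llama.cpp and relay its stdout as it arrives, which is what
    # allows the server to stream generated text to the client in real time.
    def run_prompt(prompt):
        proc = subprocess.Popen(
            [LLAMA_MAIN, "-m", LLAMA_MODEL_PATH, "-p", prompt],
            stdout=subprocess.PIPE,
            text=True,
        )
        for line in proc.stdout:
            print(line, end="", flush=True)
        proc.wait()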

References

  1. https://ai.facebook.com/blog/large-language-model-llama-meta-ai/
  2. https://github.com/ggerganov/llama.cpp
