
Demo App for llama.cpp Model

This app is a demo of llama.cpp running LLaMA-family models, recreating an offline chatbot that works similarly to OpenAI's ChatGPT. The source code for this app is available on GitHub.

Now it works with Vicuna!

You can use the latest models in the app.

Works on multiple platforms: Windows, macOS, and Android. See the Releases page.

The app was developed using Flutter and embeds ggerganov/llama.cpp, compiled to run on mobile devices. Please note that the LLaMA models are officially distributed by Meta and will not be provided by the app developers.
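As a rough illustration of how a Flutter app can call into a native llama.cpp build, the Dart FFI sketch below opens a shared library and looks up one C symbol. It is a minimal sketch, not Sherpa's actual bindings: the library name (libllama.so) is an assumption, and the real app presumably wraps the full load/tokenize/generate loop rather than a single call.

```dart
import 'dart:ffi';
import 'package:ffi/ffi.dart';

// Open the bundled llama.cpp shared library.
// The library name is an assumption for this sketch.
final DynamicLibrary _llama = DynamicLibrary.open('libllama.so');

// llama_print_system_info() returns a C string describing which CPU
// features (NEON, AVX, ...) the native library was compiled with.
typedef _SystemInfoC = Pointer<Utf8> Function();
typedef _SystemInfoDart = Pointer<Utf8> Function();

final _SystemInfoDart llamaPrintSystemInfo = _llama
    .lookupFunction<_SystemInfoC, _SystemInfoDart>('llama_print_system_info');

void main() {
  // Useful sanity check that the native library loaded correctly.
  print(llamaPrintSystemInfo().toDartString());
}
```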

To run this app, you need to download the 7B LLaMA model from Meta for research purposes. You can choose the target model (it should be a .bin file) from within the app.

Additionally, you can shape the output with preprompts to improve the responses.
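For example, a preprompt along these lines (the exact wording is only an illustration, not one shipped with the app) nudges the model toward a chat style:

```
Below is a conversation between a user and a friendly assistant.
The assistant answers helpfully, concisely, and truthfully.
User: Hello!
Assistant: Hi! How can I help you today?
User:
```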

Working demo

Click on the image to view the video on YouTube. It shows a OnePlus 7 with 8 GB of RAM running Sherpa without any speed-up.

Usage

To use this app, follow these steps:

  1. Download the 7B LLaMA model from Meta for research purposes.
  2. Rename the downloaded model file to ggml-model.bin.
  3. Place the file in your device's download folder (see the note after these steps).
  4. Run the app on your mobile device.
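On Android, one convenient way to copy the model from a computer is `adb push ggml-model.bin /sdcard/Download/` with USB debugging enabled (the exact download path can vary by device); on desktop platforms, the user's Downloads folder plays the same role.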

Disclaimer

Please note that the LLaMA models are owned and officially distributed by Meta. This app only serves as a demo of the models' capabilities and functionality. The developers of this app do not provide the LLaMA models and are not responsible for any issues related to their usage.
