LFX Mentorship Pre-test for #3168 & #3170

00 Environment

  • MacBook Air (M1, 8 GB RAM / 256 GB storage)
  • macOS Sonoma 14.2.1
  • Terminal: iTerm
  • Python 3.10.9
  • GNU Make 3.81
  • cmake version 3.28.0
  • Homebrew 4.1.14

01 Framework Execution

Applicants must demonstrate proficiency in building and executing backend frameworks. You are required to share screenshots and brief documentation detailing your build and execution process for examples from these frameworks. You can pick any example to demonstrate the execution.

1.1 mlx

1.1.1 Installation

Follow the official MLX installation guide.

  • Python Installation
pip install mlx

(screenshot: pip install mlx output)
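
To confirm the install works before moving on, a quick sanity check in a Python shell (a minimal sketch; the array values are arbitrary):

import mlx.core as mx

# Build a small array, force evaluation of a lazy op, and show the default device
a = mx.array([1.0, 2.0, 3.0])
b = a * 2
mx.eval(b)
print(b)                    # array([2, 4, 6], dtype=float32)
print(mx.default_device())  # Device(gpu, 0) on Apple Silicon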

  • Build in C++

MLX must be built and installed from source:

git clone git@github.com:ml-explore/mlx.git mlx && cd mlx
mkdir -p build && cd build
cmake .. && make -j
make test
make install

(screenshot: MLX C++ build, tests, and install)

1.1.2 mlx whisper example

Clone the examples repo and enter the working directory

git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper

Install the dependencies

pip install -r requirements.txt
brew install ffmpeg

Convert the model to MLX format

python convert.py --torch-name-or-path tiny --mlx-path mlx_models/tiny

(screenshot: converting the tiny model to MLX format)

Convert audio to text

import whisper

# Transcribe with word-level timestamps; print the words of the first segment
output = whisper.transcribe("/Users/ryan/Downloads/audio.mp3", word_timestamps=True)
print(output["segments"][0]["words"])

(screenshot: word-level timestamp output)
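
To transcribe with the locally converted model from the step above instead of the default download, the model directory can be passed in as well; a sketch, assuming the path_or_hf_repo keyword that mlx-examples' whisper accepted at the time:

import whisper

# Point transcription at the local MLX model converted earlier
output = whisper.transcribe(
    "/Users/ryan/Downloads/audio.mp3",
    path_or_hf_repo="mlx_models/tiny",
)
print(output["text"])  # full transcript as a single string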


1.2 whisper.cpp

Clone the repo

git clone https://github.com/ggerganov/whisper.cpp.git

Download a model pre-converted to ggml format

bash ./models/download-ggml-model.sh base.en

(screenshot: downloading the base.en ggml model)

Build and test

make

./main -f samples/jfk.wav

(screenshot: whisper.cpp build output)

(screenshot: transcription of samples/jfk.wav)
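
The example binary expects 16-bit, 16 kHz mono WAV input, so other audio has to be converted first; a small helper using the ffmpeg installed earlier (audio.mp3 is a hypothetical input file):

import subprocess

# whisper.cpp's ./main only accepts 16-bit 16 kHz WAV audio
subprocess.run(
    ["ffmpeg", "-i", "audio.mp3",
     "-ar", "16000", "-ac", "1", "-c:a", "pcm_s16le",
     "samples/audio.wav"],
    check=True,
)
# then: ./main -f samples/audio.wav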

02 Using WasmEdge

Using the hydai/0.13.5_ggml_lts branch.

2.1 Build the llama.cpp plugin

Follow the official guide to build the llama.cpp plugin, then execute it with the chat example and the API server example shown below.

Install hydai/0.13.5_ggml_lts
git clone https://github.com/WasmEdge/WasmEdge.git -b hydai/0.13.5_ggml_lts
cd WasmEdge
brew install grpc
brew install llvm
brew install ninja   # required by the -GNinja generator below
brew install cmake
export LLVM_DIR=/opt/homebrew/opt/llvm/lib/cmake
# For Apple Silicon: enable the Metal backend and disable BLAS
cmake -GNinja -Bbuild -DCMAKE_BUILD_TYPE=Release \
  -DWASMEDGE_PLUGIN_WASI_NN_BACKEND="GGML" \
  -DWASMEDGE_PLUGIN_WASI_NN_GGML_LLAMA_METAL=ON \
  -DWASMEDGE_PLUGIN_WASI_NN_GGML_LLAMA_BLAS=OFF \
  .

(screenshot: cmake configuration)

cmake --build build

(screenshot: cmake build output)

cmake --install build

(screenshot: cmake install output)
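
Before loading a model, it's worth confirming the freshly installed binary is reachable; a tiny check, assuming the install prefix is on PATH:

import subprocess

# Print the installed WasmEdge version string
print(subprocess.run(["wasmedge", "--version"],
                     capture_output=True, text=True).stdout)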

2.2 Run a specific model

Download the model
curl -LO https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/resolve/main/Llama-2-7b-chat-hf-Q5_K_M.gguf

Chat with the model on the CLI

curl -LO https://github.com/second-state/LlamaEdge/releases/latest/download/llama-chat.wasm

wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf llama-chat.wasm -p llama-2-chat

(screenshot: CLI chat session with Llama-2-7B)

API Test
curl -LO https://github.com/second-state/LlamaEdge/releases/latest/download/llama-api-server.wasm
curl -LO https://github.com/second-state/chatbot-ui/releases/latest/download/chatbot-ui.tar.gz
tar xzf chatbot-ui.tar.gz
rm chatbot-ui.tar.gz

wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf llama-api-server.wasm -p llama-2-chat

(screenshot: API server test)
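
With the server up, the OpenAI-compatible endpoint can also be exercised from a script; a sketch, assuming the server's default port 8080 and that the model name matches the default preload alias:

import json
import urllib.request

# POST one chat turn to llama-api-server's /v1/chat/completions endpoint
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "default",
        "messages": [{"role": "user", "content": "What is WasmEdge?"}],
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])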

Chat with the model via a web UI

The chatbot-ui downloaded above is served by the same API server, so start it again if it is not already running:

wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf llama-api-server.wasm -p llama-2-chat

Then open http://localhost:8080 in a browser to chat with the model.

(screenshot: chatting through the web UI)
