From f8e28af85a6d549c1baa1b3dd918ab3519fa4923 Mon Sep 17 00:00:00 2001
From: Svetlana Karslioglu
Date: Mon, 23 Sep 2024 13:12:15 -0700
Subject: [PATCH 1/2] Fix image sizes in README.md

---
 examples/demo-apps/android/LlamaDemo/README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/examples/demo-apps/android/LlamaDemo/README.md b/examples/demo-apps/android/LlamaDemo/README.md
index 8bb04fd03a1..cfb66538269 100644
--- a/examples/demo-apps/android/LlamaDemo/README.md
+++ b/examples/demo-apps/android/LlamaDemo/README.md
@@ -46,7 +46,7 @@ Below are the UI features for the app.
Select the settings widget to get started with picking a model, its parameters and any prompts.

-
+

@@ -55,7 +55,7 @@ Select the settings widget to get started with picking a model, its parameters a
Once you've selected the model, tokenizer, and model type, you are ready to click "Load Model" to have the app load the model and go back to the main Chat activity.

-
+

@@ -86,12 +86,12 @@ int loadResult = mModule.load();
### User Prompt
Once the model is successfully loaded, enter any prompt and click the send (i.e. generate) button to send it to the model.

-
+

You can ask it follow-up questions as well.

-
+
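For readers skimming this diff, the hunk contexts in this patch reference the demo app's Java API: `int loadResult = mModule.load();` and `mModule.generate(prompt,sequence_length, MainActivity.this);`. Below is a minimal sketch of how those two calls fit together; the `LlamaModule` constructor arguments and the `LlamaCallback` method names are assumptions based on the ExecuTorch Android demo, not something this patch verifies.

```java
// Sketch only: package, constructor arguments, and callback shape are assumptions.
import org.pytorch.executorch.LlamaCallback;
import org.pytorch.executorch.LlamaModule;

public class ChatController implements LlamaCallback {
    // Model path, tokenizer path, and temperature (assumed constructor).
    private final LlamaModule mModule = new LlamaModule(
            "/data/local/tmp/llama/model.pte",
            "/data/local/tmp/llama/tokenizer.bin",
            0.8f);

    public void sendPrompt(String prompt, int sequence_length) {
        int loadResult = mModule.load(); // the call referenced in the hunk context
        if (loadResult != 0) {
            throw new RuntimeException("load() failed with code " + loadResult);
        }
        // Streams generated tokens back through the callback methods below.
        mModule.generate(prompt, sequence_length, this);
    }

    @Override
    public void onResult(String token) {
        System.out.print(token); // the demo app appends tokens to the chat UI instead
    }

    @Override
    public void onStats(float tokensPerSecond) {
        System.out.println("\n~" + tokensPerSecond + " tokens/s");
    }
}
```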

#### ExecuTorch App API
@@ -108,14 +108,14 @@ mModule.generate(prompt,sequence_length, MainActivity.this);
For the LLaVA-1.5 implementation, select the exported LLaVA .pte and tokenizer file in the Settings menu and load the model. After this, you can send an image from your gallery or take a live picture along with a text prompt to the model.

-
+
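The LLaVA-1.5 flow pairs a gallery or camera image with the text prompt. This patch only resizes the screenshots, so purely as a hedged illustration, here is one way the app side might flatten an Android `Bitmap` into a pixel array before the multimodal generate call; the exact image-passing signature of `LlamaModule` is not shown in this diff, so the sketch stops at preparing the pixels.

```java
import android.graphics.Bitmap;

public final class ImagePrep {
    private ImagePrep() {}

    /**
     * Flattens a gallery/camera Bitmap into a row-major ARGB int array.
     * How these pixels are handed to the multimodal generate call is an
     * assumption this patch does not cover.
     */
    public static int[] toArgbPixels(Bitmap bitmap) {
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        int[] pixels = new int[width * height];
        // Bitmap.getPixels copies one packed ARGB int per pixel.
        bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
        return pixels;
    }
}
```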

### Output Generated
To show completion of the follow-up question, here is the model's complete, detailed response.

-
+
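The complete response in the screenshot above is the accumulation of streamed tokens. As a hedged sketch of that accumulation (the threading model and view names are assumptions, not part of this patch):

```java
import android.app.Activity;
import android.widget.TextView;

public class ResponseAccumulator {
    private final StringBuilder response = new StringBuilder();
    private final Activity activity;
    private final TextView outputView; // hypothetical chat output view

    public ResponseAccumulator(Activity activity, TextView outputView) {
        this.activity = activity;
        this.outputView = outputView;
    }

    /** Append one streamed token; generation is assumed to run off the main thread. */
    public void onToken(String token) {
        response.append(token);
        // Android requires UI mutations to happen on the main thread.
        activity.runOnUiThread(() -> outputView.setText(response.toString()));
    }
}
```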

#### ExecuTorch App API

From a889fd22c2da1733ad48db8f825816caa316aa9d Mon Sep 17 00:00:00 2001
From: Svetlana Karslioglu
Date: Mon, 23 Sep 2024 13:20:11 -0700
Subject: [PATCH 2/2] Update

---
 .../android/LlamaDemo/docs/delegates/mediatek_README.md | 4 ++--
 .../android/LlamaDemo/docs/delegates/qualcomm_README.md | 2 +-
 .../android/LlamaDemo/docs/delegates/xnnpack_README.md  | 2 +-
 examples/demo-apps/apple_ios/LLaMA/README.md            | 8 ++++----
 .../apple_ios/LLaMA/docs/delegates/mps_README.md        | 6 +++---
 .../apple_ios/LLaMA/docs/delegates/xnnpack_README.md    | 8 ++++----
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md
index 24be86f07a5..e820055ecfd 100644
--- a/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md
+++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md
@@ -112,7 +112,7 @@ Before continuing forward, make sure to modify the tokenizer, token embedding, a
Prior to deploying the files on device, make sure to modify the tokenizer, token embedding, and model file names in examples/mediatek/executor_runner/run_llama3_sample.sh to reflect what was generated during the Export Llama Model step.

-
+

In addition, create a sample_prompt.txt file with a prompt. This will be deployed to the device in the next step.

@@ -150,7 +150,7 @@ adb shell
```

-
+

## Reporting Issues

diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md
index 304cfc5d39b..c3f2a27e835 100644
--- a/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md
+++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md
@@ -221,7 +221,7 @@ popd
If the app runs successfully on your device, you should see something like below:

-
+

## Reporting Issues

diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md
index ee6f89db5b9..cb6193942ae 100644
--- a/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md
+++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md
@@ -149,7 +149,7 @@ popd
If the app runs successfully on your device, you should see something like below:

-
+

## Reporting Issues

diff --git a/examples/demo-apps/apple_ios/LLaMA/README.md b/examples/demo-apps/apple_ios/LLaMA/README.md
index b6644f1dfb9..293963b6c2a 100644
--- a/examples/demo-apps/apple_ios/LLaMA/README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/README.md
@@ -59,11 +59,11 @@ For more details integrating and Running ExecuTorch on Apple Platforms, checkout
* Ensure that the ExecuTorch package dependencies are installed correctly, then select which ExecuTorch framework should link against which target.

-iOS LLaMA App Swift PM
+iOS LLaMA App Swift PM

-iOS LLaMA App Choosing package
+iOS LLaMA App Choosing package

* Run the app. This builds and launches the app on the phone.
@@ -83,13 +83,13 @@ For more details integrating and Running ExecuTorch on Apple Platforms, checkout
If the app runs successfully on your device, you should see something like below:

-iOS LLaMA App
+iOS LLaMA App

For Llava 1.5 models, you can select an image (via the image/camera selector button) before typing a prompt and tapping the send button.

-iOS LLaMA App
+iOS LLaMA App

## Reporting Issues

diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md
index 19b14ae4c30..20ee73b821f 100644
--- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md
@@ -95,19 +95,19 @@ Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.
For more details on integrating and running ExecuTorch on Apple platforms, check out this [link](https://pytorch.org/executorch/main/apple-runtime.html).

-iOS LLaMA App Swift PM
+iOS LLaMA App Swift PM

Then select which ExecuTorch framework should link against which target.

-iOS LLaMA App Choosing package
+iOS LLaMA App Choosing package

Click “Run” to build the app and run it on your iPhone. If the app runs successfully on your device, you should see something like below:

-iOS LLaMA App mps
+iOS LLaMA App mps

## Reporting Issues

diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
index fbe709742dd..6497449ba25 100644
--- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
@@ -95,25 +95,25 @@ Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.
For more details on integrating and running ExecuTorch on Apple platforms, check out this [link](https://pytorch.org/executorch/main/apple-runtime.html).

-iOS LLaMA App Swift PM
+iOS LLaMA App Swift PM

Then select which ExecuTorch framework should link against which target.

-iOS LLaMA App Choosing package
+iOS LLaMA App Choosing package

Click “Run” to build the app and run it on your iPhone. If the app runs successfully on your device, you should see something like below:

-iOS LLaMA App
+iOS LLaMA App

For Llava 1.5 models, you can select an image (via the image/camera selector button) before typing a prompt and tapping the send button.

-iOS LLaMA App
+iOS LLaMA App

## Reporting Issues