From 0e6cd8678e6faca77416534a95517845736e368c Mon Sep 17 00:00:00 2001
From: Hansong Zhang
Date: Thu, 19 Sep 2024 12:36:27 -0700
Subject: [PATCH 1/2] Update GH link in docs (#5493)

Summary: Should use the raw link instead of GH web link

Pull Request resolved: https://github.com/pytorch/executorch/pull/5493

Reviewed By: shoumikhin

Differential Revision: D63040432

Pulled By: kirklandsign

fbshipit-source-id: f6b8f1ec4fe2d7ac1c5f25cc1c727279a9d20065
(cherry picked from commit 16673f964912169261bfbaa46f7147a24766cc9b)
---
 examples/demo-apps/android/LlamaDemo/README.md | 12 ++++++------
 examples/demo-apps/apple_ios/LLaMA/README.md   | 15 ++++++++++++---
 2 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/examples/demo-apps/android/LlamaDemo/README.md b/examples/demo-apps/android/LlamaDemo/README.md
index 1d4dc6d5769..6a1c594a245 100644
--- a/examples/demo-apps/android/LlamaDemo/README.md
+++ b/examples/demo-apps/android/LlamaDemo/README.md
@@ -46,7 +46,7 @@ Below are the UI features for the app.
Select the settings widget to get started with picking a model, its parameters and any prompts.

-[screenshot: GitHub web link]
+[screenshot: raw link]

@@ -55,7 +55,7 @@ Select the settings widget to get started with picking a model, its parameters a Once you've selected the model, tokenizer, and model type you are ready to click on "Load Model" to have the app load the model and go back to the main Chat activity.

-[screenshot: GitHub web link]
+[screenshot: raw link]

@@ -87,12 +87,12 @@ int loadResult = mModule.load();
### User Prompt
Once the model is successfully loaded, enter any prompt and click the send (i.e. generate) button to send it to the model.

-[screenshot: GitHub web link]
+[screenshot: raw link]

You can ask it follow-up questions as well.

-[screenshot: GitHub web link]
+[screenshot: raw link]

> [!TIP]
@@ -109,14 +109,14 @@ mModule.generate(prompt,sequence_length, MainActivity.this);
For LLaVA-1.5 implementation, select the exported LLaVA .pte and tokenizer file in the Settings menu and load the model. After this you can send an image from your gallery or take a live picture along with a text prompt to the model.

-[screenshot: GitHub web link]
+[screenshot: raw link]

### Output Generated
To show completion of the follow-up question, here is the complete detailed response from the model.

-[screenshot: GitHub web link]
+[screenshot: raw link]

> [!TIP]
diff --git a/examples/demo-apps/apple_ios/LLaMA/README.md b/examples/demo-apps/apple_ios/LLaMA/README.md
index 7e9fc59339e..6d6a589cf24 100644
--- a/examples/demo-apps/apple_ios/LLaMA/README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/README.md
@@ -39,7 +39,16 @@ Xcode will download and cache the package on the first run, which will take some
```
* Open Xcode and select "Open an existing project" to open `examples/demo-apps/apple_ios/LLama`.
-* Ensure that the ExecuTorch package dependencies are installed correctly.
+* Ensure that the ExecuTorch package dependencies are installed correctly, then select which ExecuTorch framework should link against which target.
+
+

+[screenshot: iOS LLaMA App Swift PM]

+
+

+[screenshot: iOS LLaMA App Choosing package]

+
* Run the app. This builds and launches the app on the phone.
* In the app UI, pick a model and tokenizer to use, type a prompt, and tap the arrow button.
@@ -58,13 +67,13 @@ Xcode will download and cache the package on the first run, which will take some
If the app runs successfully on your device, you should see something like below:

-[screenshot: iOS LLaMA App, GitHub web link]
+[screenshot: iOS LLaMA App, raw link]

For LLaVA 1.5 models, you can select an image (via the image/camera selector button) before typing your prompt and tapping the send button.

-[screenshot: iOS LLaMA App, GitHub web link]
+[screenshot: iOS LLaMA App, raw link]

## Reporting Issues

From 23b62261276055bad79467d7a9a7e0da3d5049e7 Mon Sep 17 00:00:00 2001
From: Hansong Zhang
Date: Fri, 20 Sep 2024 13:39:33 -0700
Subject: [PATCH 2/2] Fix link

---
 examples/demo-apps/android/LlamaDemo/README.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/examples/demo-apps/android/LlamaDemo/README.md b/examples/demo-apps/android/LlamaDemo/README.md
index 3d8c9367102..41b030cef06 100644
--- a/examples/demo-apps/android/LlamaDemo/README.md
+++ b/examples/demo-apps/android/LlamaDemo/README.md
@@ -46,11 +46,7 @@ Below are the UI features for the app.
Select the settings widget to get started with picking a model, its parameters and any prompts.

-<<<<<<< HEAD
-=======
-
->>>>>>> origin/release/0.4
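For context on the Android README text touched above: the hunk headers quote `int loadResult = mModule.load();` and `mModule.generate(prompt,sequence_length, MainActivity.this);`. Below is a minimal sketch of how those two calls fit together; the package name, the `LlamaModule` constructor arguments, and the `LlamaCallback` method names are assumptions for illustration and are not taken from this patch.

```java
// Minimal sketch of the load/generate flow referenced in the README hunks above.
// Assumptions (not from this patch): the org.pytorch.executorch package, the
// LlamaModule(modelPath, tokenizerPath, temperature) constructor, and the
// LlamaCallback onResult/onStats signatures.
import org.pytorch.executorch.LlamaCallback;
import org.pytorch.executorch.LlamaModule;

public class LlamaDemoSketch implements LlamaCallback {
    private LlamaModule mModule;

    public void loadAndGenerate(String modelPath, String tokenizerPath, String prompt) {
        // Hypothetical constructor: exported .pte model, tokenizer file, sampling temperature.
        mModule = new LlamaModule(modelPath, tokenizerPath, 0.8f);

        // Matches the snippet quoted in the hunk header: 0 is treated as success.
        int loadResult = mModule.load();
        if (loadResult != 0) {
            throw new RuntimeException("Model load failed with code " + loadResult);
        }

        // Matches the README snippet, with this class standing in for MainActivity
        // as the streaming callback.
        int sequenceLength = 128;
        mModule.generate(prompt, sequenceLength, this);
    }

    @Override
    public void onResult(String token) {
        // Generated tokens stream back here one at a time.
        System.out.print(token);
    }

    @Override
    public void onStats(float tokensPerSecond) {
        System.out.println("\ntokens/s: " + tokensPerSecond);
    }
}
```

In the demo app itself, the README text indicates that MainActivity plays the callback role and streams the generated tokens into the chat UI.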