From 6b959c6404b645cb20c833272358b4a5e3e03b8a Mon Sep 17 00:00:00 2001
From: Mengtao Yuan
Date: Wed, 24 Apr 2024 09:39:19 -0700
Subject: [PATCH] Update Llava README.md

Simplify the instruction.
---
 examples/models/llava_encoder/README.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/examples/models/llava_encoder/README.md b/examples/models/llava_encoder/README.md
index a074fa61332..76224e41454 100644
--- a/examples/models/llava_encoder/README.md
+++ b/examples/models/llava_encoder/README.md
@@ -5,10 +5,7 @@ In this example, we initiate the process of running multi modality through Execu
 
 ## Instructions
 Note that this folder does not host the pretrained LLava model.
-- To have Llava available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLava github. Follow the licence in the specific repo when using L
-- Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh`, to replace the steps above.
+- Run `examples/models/llava_encoder/install_requirements.sh`.
 - Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The llava_encoder.pte file will be generated.
 - Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in llama2 folder for optimization.
 