diff --git a/examples/models/llama2/README.md b/examples/models/llama2/README.md
index b996e89cce6..7284acfcec6 100644
--- a/examples/models/llama2/README.md
+++ b/examples/models/llama2/README.md
@@ -111,9 +111,11 @@ You can export and run the original Llama3 8B model.
 
 2. Export model and generate `.pte` file
     ```
-    python -m examples.models.llama2.export_llama --checkpoint <consolidated.00.pth> -p <params.json> -d=fp32 -X -qmode 8da4w -kv --use_sdpa_with_kv_cache --output_name="llama3_kv_sdpa_xnn_qe_4_32.pte" group_size 128 --metadata '{"get_bos_id":128000, "get_eos_id":128001}' --embedding-quantize 4,32
+    python -m examples.models.llama2.export_llama --checkpoint <consolidated.00.pth> -p <params.json> -kv --use_sdpa_with_kv_cache -X -qmode 8da4w --group_size 128 -d fp32 --metadata '{"get_bos_id":128000, "get_eos_id":128001}' --embedding-quantize 4,32 --output_name="llama3_kv_sdpa_xnn_qe_4_32.pte"
     ```
 
+    Due to the larger vocabulary size of Llama3, we recommend quantizing the embeddings with `--embedding-quantize 4,32` to further reduce the model size.
+
 ## (Optional) Finetuning
 
 If you want to finetune your model based on a specific dataset, PyTorch provides [TorchTune](https://github.com/pytorch/torchtune) - a native-Pytorch library for easily authoring, fine-tuning and experimenting with LLMs.
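As context for the added recommendation, here is a minimal back-of-the-envelope sketch (not part of the diff) of why 4-bit embedding quantization pays off for Llama3. The vocabulary size (128,256) and embedding width (4,096) are Llama3 8B's published model shapes; the one-fp16-scale-per-group storage layout is an illustrative assumption, not necessarily the exact format `--embedding-quantize` produces.

```python
# Rough size estimate for the Llama3 8B embedding table under 4-bit
# groupwise quantization. Shapes are Llama3 8B's; the scale layout
# (one fp16 scale per group of 32 weights) is an assumption.

VOCAB_SIZE = 128_256   # Llama3 8B vocabulary size (vs. 32,000 for Llama2)
EMBED_DIM = 4_096      # Llama3 8B hidden size
GROUP_SIZE = 32        # the "32" in --embedding-quantize 4,32

params = VOCAB_SIZE * EMBED_DIM              # ~525M embedding parameters

fp32_bytes = params * 4                      # 4 bytes per fp32 weight
int4_bytes = params // 2                     # 2 packed 4-bit weights per byte
scale_bytes = (params // GROUP_SIZE) * 2     # assumed fp16 scale per group

print(f"fp32 embeddings : {fp32_bytes / 2**20:8.1f} MiB")
print(f"4-bit + scales  : {(int4_bytes + scale_bytes) / 2**20:8.1f} MiB")
```

Even under these rough assumptions, the embedding table alone drops from roughly 2 GiB in fp32 to under 300 MiB, which is why the added note singles out Llama3's enlarged vocabulary.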