diff --git a/examples/qualcomm/oss_scripts/llama/README.md b/examples/qualcomm/oss_scripts/llama/README.md
index 79c20180d69..439278cb424 100644
--- a/examples/qualcomm/oss_scripts/llama/README.md
+++ b/examples/qualcomm/oss_scripts/llama/README.md
@@ -5,9 +5,10 @@ This file provides you the instructions to run LLAMA model with different parame
1. LLAMA2 Stories 110M
2. LLAMA3.2 1B
3. LLAMA3.2 3B (WIP)
+
We offer the following modes to execute the model:
-Prefill Mode: This is also known as batch prefill mode, where the model takes in a list of tokens as input and generates the next token along with the key-value (KV) cache for all tokens. This mode is efficient for generating the initial sequence of tokens (usually the user's prompt).
+Prefill Mode: This is also known as batch prefill mode, where the model takes in a list of tokens as input and generates the next token along with the key-value (KV) cache for all tokens. This mode is efficient for encoding the user's prompt.
KV Cache Mode: In KV Cache mode, the model takes in a single previous token and generates the next predicted token along with its KV cache. It is efficient for generating subsequent tokens after the initial prompt.
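+
+A typical generation run chains the two modes: one prefill inference to encode the prompt, then repeated KV cache inferences to decode. Below is a minimal C++ sketch of that control flow; the function names and the `KVCache` type are illustrative placeholders, not the runner's actual API.
+```cpp
+#include <cstddef>
+#include <cstdint>
+#include <vector>
+
+struct KVCache {};  // placeholder for the per-layer key/value buffers
+
+// Stubs standing in for the compiled prefill and KV-cache graphs,
+// which really execute on the NPU.
+int32_t run_prefill(const std::vector<int32_t>& prompt, KVCache& cache) {
+  return 0;  // dummy token
+}
+int32_t run_kv_step(int32_t prev_token, KVCache& cache) {
+  return 0;  // dummy token
+}
+
+std::vector<int32_t> generate(
+    const std::vector<int32_t>& prompt, std::size_t max_new_tokens) {
+  KVCache cache;
+  std::vector<int32_t> out;
+  // Prefill mode: one batched inference over the whole prompt produces
+  // the first new token plus the KV cache for every prompt token.
+  int32_t token = run_prefill(prompt, cache);
+  out.push_back(token);
+  // KV cache mode: feed back one token at a time, reusing the cache.
+  while (out.size() < max_new_tokens) {
+    token = run_kv_step(token, cache);
+    out.push_back(token);
+  }
+  return out;
+}
+```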
@@ -41,7 +42,7 @@ python -m extension.llm.tokenizer.tokenizer -t tokenizer.model -o tokenizer.bin
echo '{"dim": 768, "multiple_of": 32, "n_heads": 12, "n_layers": 12, "norm_eps": 1e-05, "vocab_size": 32000}' > params.json
```
-#### LLAMA3.2
+#### LLAMA3.2
Follow the [instructions](https://www.llama.com/) to download models.
At the end of this step, users should have the following files ready: `consolidated.00.pth`, `params.json`, and `tokenizer.model`.
@@ -58,6 +59,53 @@ Default example using hybrid mode.
python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --ptq 16a4w --checkpoint consolidated.00.pth --params params.json --tokenizer_model tokenizer.model --llama_model llama3_2 --model_mode hybrid --prefill_seq_len 32 --kv_seq_len 128 --prompt "what is 1+1"
```
+### KV Cache update mechanism
+We have two distinct mechanisms for updating the key-value (KV) cache, which can be selected at runtime: Shift Pointer and Smart Mask.
+
+#### Shift Pointer mechanism
+The Shift Pointer mechanism updates the KV cache by moving buffer start pointers instead of copying cache contents.
+For the key cache update, we initially allocate memory for each layer with `num_head` buffers, each of size `(head_dim + 1) * (seq_len - 1)`. After a single inference, the new key cache is copied from the key output pointer `k_out` and appended to the key cache. Subsequently, the buffer start pointer of the key cache `k_in` moves to the next token, making the previous position of the buffer start pointer unused. This process is repeated for each subsequent inference step.
+For the value cache update, we initially allocate a contiguous memory of size `(num_head + 1) * head_dim * (seq_len - 1)` for each layer, with the last head reserved for I/O shifting. After the first inference, the cache is updated by simply shifting the pointers of all heads to the next token position, making only the previous `head_dim * 1` section of the buffer start pointer `v_in` of the first head unused. This process is repeated for each subsequent inference step.
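+
+Below is a simplified C++ sketch of one Shift Pointer update step for a single head of one layer, assuming the key cache is stored transposed as `(head_dim, seq_len - 1)` and the value cache token-major as `(seq_len - 1, head_dim)`; the function, the `uint8_t` element type, and the exact indexing are illustrative, not the actual runner code.
+```cpp
+#include <cstddef>
+#include <cstdint>
+
+// One Shift Pointer update step for a single head. past_size == seq_len - 1;
+// the key buffer holds (head_dim + 1) * past_size elements, so the window
+// can slide forward once per generated token without running off the end.
+void shift_pointer_step(
+    uint8_t*& k_in,        // buffer start pointer of the key cache window
+    const uint8_t* k_out,  // the new token's key, head_dim values
+    uint8_t*& v_in,        // buffer start pointer of the value cache window
+    std::size_t head_dim,
+    std::size_t past_size) {
+  // Key update: row d of the current window spans
+  // k_in[d * past_size .. (d + 1) * past_size - 1]. Writing element d at
+  // (d + 1) * past_size puts it exactly where row d's last column will be
+  // once the pointer shifts below.
+  for (std::size_t d = 0; d < head_dim; ++d) {
+    k_in[(d + 1) * past_size] = k_out[d];
+  }
+  k_in += 1;  // slide the whole (head_dim, past_size) window by one token;
+              // the old first element becomes unused
+  // Value update: the graph already wrote the new token's values just past
+  // the current window (the reserved extra head provides the slack), so the
+  // update is a pure pointer shift with no copy.
+  v_in += head_dim;
+}
+```
+Note how this matches the complexity table at the end of this section: the key update copies `head_dim` values per head, while the value update is a constant-time pointer shift.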
+
+#### Smart Mask mechanism
+Unlike the Shift Pointer mechanism, which moves the buffer start pointers `k_in`/`v_in` of the cache, the Smart Mask mechanism updates only the new token at the specified position. This approach eliminates the need to adjust the buffer start pointer. This mechanism is beneficial for shared buffers, but it requires CPU memory copying.
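+
+For comparison, here is a sketch of one Smart Mask update step under the same illustrative assumptions (per-head caches, `uint8_t` elements); the attention-mask handling is a plausible simplification, not the actual runner code:
+```cpp
+#include <cstddef>
+#include <cstdint>
+#include <cstring>
+
+// One Smart Mask update step for a single head. The cache keeps a fixed
+// base address; the new token is written at its absolute position `pos`,
+// and the attention mask is opened for that position instead of any
+// pointer being moved.
+void smart_mask_step(
+    uint8_t* k_cache,      // key cache, (head_dim, seq_len), fixed base
+    const uint8_t* k_out,  // the new token's key, head_dim values
+    uint8_t* v_cache,      // value cache, (seq_len, head_dim), fixed base
+    const uint8_t* v_out,  // the new token's value, head_dim values
+    uint16_t* attn_mask,   // one entry per cache position
+    std::size_t head_dim,
+    std::size_t seq_len,
+    std::size_t pos) {
+  // The key cache is transposed, so the new token fills column `pos`
+  // of every row.
+  for (std::size_t d = 0; d < head_dim; ++d) {
+    k_cache[d * seq_len + pos] = k_out[d];
+  }
+  // The value cache is token-major, so the new token fills row `pos`
+  // in one contiguous copy.
+  std::memcpy(v_cache + pos * head_dim, v_out, head_dim);
+  // Mark position `pos` as attendable; illustrative "unmasked" sentinel.
+  attn_mask[pos] = 0xFFFF;
+}
+```
+Here both the key and value updates copy `head_dim` values per head on the CPU, which is the memory-copy cost reflected in the table below: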
+| Mechanism | Time Complexity (K) | Time Complexity (V) | Space Complexity (K) | Space Complexity (V) |
+|---|---|---|---|---|
+| Shift Pointer | `num_head * head_dim` | `1` | `num_head * (head_dim + 1) * seq_len` | `(num_head + 1) * head_dim * (seq_len - 1)` |
+| Smart Mask | `num_head * head_dim` | `num_head * head_dim` | `num_head * seq_len * head_dim` | `num_head * seq_len * head_dim` |