server : host-memory prompt caching #16391
Merged
+809
−467
Conversation
I've been testing this with Claude Code and Codex and haven't spotted any issues. After a few more rounds of testing today, planning to merge.
* server : add option to debug the slot contents
* Update tools/server/server.cpp

---------

Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
target #16440
rel #16117

Initial version of automatic memory offloading to host memory, using extended logic for minimizing prompt reprocessing. The host-memory prompt cache acts as "extra slots" against which we can calculate prefix similarity and decide to hot-swap them into the `llama_context` if doing so would reduce the processing (a sketch of this selection logic is shown below). The cache is stored in regular RAM.

The RAM size that is used for caching prompts has 2 limits:

- an explicit limit (`--cache-ram, -cram` CLI arg)
- an implicit limit based on the size of the context (`--context-size`)

The server logs provide detailed prompt cache information each time the cache is updated.
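To make the hot-swap decision concrete, here is a minimal sketch of the prefix-similarity idea. All types and names (`cached_prompt`, `common_prefix_len`, `pick_cache_entry`) are hypothetical simplifications, not the actual server code:

```cpp
// Hypothetical sketch: pick the host-memory cache entry whose token prefix
// overlaps the new prompt the most, and hot-swap it only if it beats the
// prefix that is already loaded in the active slot.
#include <cstddef>
#include <cstdint>
#include <vector>

using tokens = std::vector<int>;

// Length of the common token prefix between two prompts.
static size_t common_prefix_len(const tokens & a, const tokens & b) {
    size_t n = 0;
    while (n < a.size() && n < b.size() && a[n] == b[n]) {
        n++;
    }
    return n;
}

struct cached_prompt {
    tokens prompt;               // the tokenized prompt that was cached
    std::vector<uint8_t> state;  // serialized context state kept in host RAM
};

// Returns the index of the best cache entry, or -1 if keeping the current
// slot contents results in less reprocessing.
static int pick_cache_entry(const tokens & new_prompt,
                            const tokens & slot_prompt,
                            const std::vector<cached_prompt> & cache) {
    size_t best     = common_prefix_len(new_prompt, slot_prompt);
    int    best_idx = -1;
    for (size_t i = 0; i < cache.size(); ++i) {
        const size_t n = common_prefix_len(new_prompt, cache[i].prompt);
        if (n > best) {
            best     = n;
            best_idx = (int) i;
        }
    }
    return best_idx;
}
```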
A small QoL improvement is that `update_slots()` now also logs the old and new prompt for each task around `n_past` (up to 10 tokens), so we can better understand what caused the particular choice of the `n_past` value for the new task.
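As an illustration of what logging "around `n_past`" could look like, a hypothetical helper might print a small window of tokens on each side of the match point; this is not the actual server logging code:

```cpp
// Hypothetical helper: print up to `radius` tokens on each side of n_past so
// the prefix-match decision is visible in the logs.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

static void log_window(const std::vector<int> & prompt, size_t n_past, size_t radius = 10) {
    const size_t begin = n_past > radius ? n_past - radius : 0;
    const size_t end   = std::min(prompt.size(), n_past + radius);
    for (size_t i = begin; i < end; ++i) {
        if (i == n_past) {
            std::printf("| ");  // mark the n_past boundary
        }
        std::printf("%d ", prompt[i]);
    }
    std::printf("\n");
}
```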
Setting the `LLAMA_SERVER_SLOTS_DEBUG=1` environment variable will make the `/slots` endpoint produce more detailed output, containing the prompt and the generated text of the current or last task. This is useful for debugging.

Note: the mtmd workarounds are starting to cause some headaches. For example, `server_tokens` is not copyable, which complicates the cache logic and makes the prompt caching feature incompatible with mtmd.

Usage
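A hedged sketch of how the feature might be exercised. The model path, cache size, port, and the `--slots` flag (to expose the monitoring endpoint) are assumptions; check `llama-server --help` for the exact flags, units, and defaults of `--cache-ram`:

```bash
# Hypothetical invocation: run the server with an explicit host-memory cache
# budget and enable the more detailed /slots output for debugging.
LLAMA_SERVER_SLOTS_DEBUG=1 \
  llama-server -m model.gguf --cache-ram 8192 --slots

# With the debug env set, the slots output also includes the prompt and the
# generated text of the current or last task:
curl http://localhost:8080/slots
```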
Server refactor

- Replaced multiple `server_slot` members with a single `server_task`
- Removed `server_slot.n_predict`
- `slot.task` is now a `const` ptr to reflect that the task parameters should not change when it is passed to the slot (see the sketch below)
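As an illustration of the last point, holding the task through a pointer-to-const lets the compiler enforce that the slot never mutates the task parameters. The types below are simplified stand-ins, not the real `server_task`/`server_slot` definitions:

```cpp
#include <string>

struct server_task {
    int         id = -1;
    std::string prompt;  // parameters are fixed once the task is created
};

struct server_slot {
    const server_task * task = nullptr;  // read-only view of the assigned task

    void assign(const server_task & t) {
        task = &t;                // the slot only observes the task
        // task->prompt.clear();  // would not compile: the pointee is const
    }
};
```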
TODOs