
[Feature]: Expose implementation details of the KV Cache #776

Open
abhiaagarwal opened this issue Jun 1, 2024 · 6 comments

abhiaagarwal (Contributor) commented Jun 1, 2024

Background & Description

Since I'm using LLamaSharp on the edge, initial prompt evaluation is a high fixed cost. My prompts are static (for the most part), so I'd like to pre-initialize the KV cache and have end users consume it as a GGUF file, trading compute for storage. Based on my read of the code base, there is no specific abstraction for the llama.cpp KV store.

This probably ties into #684.

I'm interested in working on this; I'm just noting the concept down here to avoid any conflicts and for idea-planning.

API & Usage

KV caches are specific to a model, I think? Maybe there should be a whole separate class that deals with abstracting the KV cache. llama.cpp already has APIs designed to manipulate it programmatically.

How to implement

Expose the llama_kv_cache_* functions in a high-level manner.

martindevans (Member) commented:

Are you sure you need KV cache access? If you just want to pre-process a prompt and re-use it, then it sounds to me more like you want to save/load states. The high level executors (Instruct/Interactive Executor) expose methods to save a file which can be used to load that conversation back up later; I believe that should contain the saved KV cache for that sequence.
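Roughly something like this (an untested sketch: the model path and state file name are placeholders, and the `LLamaContext.SaveState`/`LoadState` and `InteractiveExecutor` names/signatures are recalled from memory rather than checked against the current code):

```csharp
using LLama;
using LLama.Common;

// Untested sketch: "model.gguf" and the state file name are placeholders.
var parameters = new ModelParams("model.gguf") { ContextSize = 4096 };
using var model = LLamaWeights.LoadFromFile(parameters);

using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// Evaluate the expensive shared prompt once. MaxTokens = 1 forces the prompt to be
// decoded (note this also decodes one generated token into the state).
await foreach (var _ in executor.InferAsync(
                   "You are a helpful assistant.",
                   new InferenceParams { MaxTokens = 1 }))
{
}

// Persist the context state (which should include the KV cache for this sequence).
context.SaveState("shared_prompt.state");

// ...later, possibly in another process: restore instead of re-evaluating the prompt.
using var restored = model.CreateContext(parameters);
restored.LoadState("shared_prompt.state");
// Continue decoding the dynamic part of the prompt from here (ideally also restoring
// the executor's own state via its SaveState/LoadState, so its bookkeeping matches).
```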

There are various ways to access the KV cache exposed in a few different places.

The "raw" low level API is in NativeApi starting at line 285:

  • llama_get_kv_cache_token_count
  • llama_get_kv_cache_used_cells
  • llama_kv_cache_clear
  • llama_kv_cache_seq_rm
  • llama_kv_cache_seq_cp
  • llama_kv_cache_seq_keep
  • llama_kv_cache_seq_add
  • llama_kv_cache_seq_div
  • llama_kv_cache_seq_pos_max

The SafeLLamaContextHandle exposes wrappers around these starting at line 566. You should prefer these wrappers over the raw APIs; they're intended to expose all of the power of the lower level, but with extra safety where possible (e.g. a pointer and a length parameter would be replaced with a Span in these wrappers).
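For example, trimming or clearing a sequence through those raw entry points looks roughly like this (a sketch only; the `LLamaSeqId`/`LLamaPos` casts and exact parameter types are assumptions from memory, and in real code the SafeLLamaContextHandle wrappers are preferable):

```csharp
using LLama;
using LLama.Common;
using LLama.Native;

// Sketch of the raw NativeApi calls listed above; the parameter wrapper types
// (LLamaSeqId / LLamaPos) and the int casts are assumptions.
var parameters = new ModelParams("model.gguf");
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var handle = context.NativeHandle;

// How many KV cells are currently occupied?
var usedCells = NativeApi.llama_get_kv_cache_used_cells(handle);

// Remove positions [10, 20) of sequence 0 from the cache...
NativeApi.llama_kv_cache_seq_rm(handle, (LLamaSeqId)0, (LLamaPos)10, (LLamaPos)20);

// ...or wipe the whole cache.
NativeApi.llama_kv_cache_clear(handle);
```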

If you're using the BatchedExecutor (which is an in-development "low level" executor, more difficult to use than the other executors but more powerful) then each Conversation object exposes a KV accessor, which can be used to manipulate the KV cache for that sequence. You can see that in use here.

abhiaagarwal (Contributor, Author) commented:

Yep, I've thought about using those APIs, but I believe my use case is a bit more specific. The prompts themselves aren't static, but parts of the prompt are. For example, with RAG, the prompt might be:

You are a helpful assistant. Answer the following question using the following pieces of information:

# Context

{c1}

{c2}

{c3}

{c1}, {c2}, {c3}, etc. aren't static, but I'm confident the rest of the prompt is. I've benchmarked it, and the initial prompt evaluation is a big fixed cost.

I've also thought about using the APIs exposed in SafeLLamaContextHandle, but based on the documentation in the llama.cpp header file, those are only for debugging and only provide a view on the underlying cache.

I haven't tested this yet, but I'm not sure the SaveState method is portable? In addition, I think it probably includes stuff I'm not interested in, and I'd like to minimize the size of the file. I guess in general, I'm interested in an equivalent of the llama.cpp --prompt-cache CLI argument.

abhiaagarwal (Contributor, Author) commented:

Actually, I dug a little bit through the llama.cpp code base, and it seems that all the prompt-cache option does is call llama_state_save_file, which is the function you've already exposed. So that's good; I was mistaken. That being said, it would be nice to have a higher-level API for manipulating the KV cache outside of SafeLLamaContextHandle.

abhiaagarwal (Contributor, Author) commented:

For an overall API design, llama-cpp-python exposes LlamaState and LlamaCache constructs: the former maps onto the llama.cpp internals, while the latter is a high-level construct without an analogous llama.cpp counterpart. Interestingly, it seems LlamaCache and LlamaState are actually present in the docs here, but based on a code search, they don't actually exist?

martindevans (Member) commented Jun 1, 2024

{c1}, {c2}, {c3}, etc. aren't static, but I'm confident the rest of the prompt is. I've benchmarked it, and the initial prompt evaluation is a big fixed cost.

Yep, so my suggestion was to evaluate everything before {c1}, save that, and then resume from there later on.
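Concretely, something along these lines (an untested sketch, with the same assumed SaveState/LoadState names as before; the file names and the exact split point are just illustrative):

```csharp
using LLama;
using LLama.Common;

// Everything up to and including "# Context" is constant, so decode and save it once;
// per request, only the retrieved chunks and the question still need to be evaluated.
const string StaticPrefix =
    "You are a helpful assistant. Answer the following question using the " +
    "following pieces of information:\n\n# Context\n\n";

var parameters = new ModelParams("model.gguf") { ContextSize = 4096 };
using var model = LLamaWeights.LoadFromFile(parameters);

// One-off (e.g. at build/deploy time): evaluate the static prefix and ship the state file.
using (var context = model.CreateContext(parameters))
{
    var executor = new InteractiveExecutor(context);
    await foreach (var _ in executor.InferAsync(StaticPrefix, new InferenceParams { MaxTokens = 1 })) { }
    context.SaveState("rag_prefix.state");
}

// Per request: restore the prefix state and only pay for the dynamic part.
using (var context = model.CreateContext(parameters))
{
    context.LoadState("rag_prefix.state");

    string[] retrievedChunks = { "c1 ...", "c2 ...", "c3 ..." }; // hypothetical retrieval results
    var dynamicPart = string.Join("\n\n", retrievedChunks) + "\n\nQuestion: ...";
    // Decode dynamicPart and generate from here (e.g. with a fresh executor whose
    // state is restored alongside the context, or via the lower-level context APIs).
}
```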

but based on the documentation in the llama.cpp header file, those are only for debugging and only provide a view on the underlying cache.

Some of the functions here are specifically for debugging only, but not all of them (this is mentioned in the comments). The debugging ones are exposed in a higher-level wrapper through LLamaKvCacheView.

e.g. llama_get_kv_cache_token_count is for debugging but llama_kv_cache_seq_rm is definitely not!

For an overall API design, llama-cpp-python exposes LlamaState and LlamaCache

I can't find any docs on LlamaCache; do you have a link to any? From a look at the implementation here, it looks like it automatically loads and saves states (presumably using llama_state_save_file) so you can resume a sequence later using the same cache?

abhiaagarwal (Contributor, Author) commented:

Yeah, I just read through the code; it's not really documented. Here's where it's actually used, though: https://github.com/abetlen/llama-cpp-python/blob/165b4dc6c188f8fda2fc616154e111f710484eba/llama_cpp/llama.py#L1073C1-L1089C1. It seems LlamaCache is basically just a thin wrapper over LlamaState that handles continuously exporting the state?
