
Add support for control vectors #5970

Merged
merged 5 commits on Mar 15, 2024

Conversation

@vgel (Contributor) commented Mar 10, 2024

Many thanks to Nous Research, whose support and collaboration made this work possible!

This PR introduces a new activations hacking technique, control vectors (also known as steering vectors, concept vectors, representation engineering, etc.). Control vectors are an easy-to-train (~60s on a 4090 for a 7B parameter model) way to modify the behavior of an LLM without finetuning or inference-time prompting, using a synthetic dataset of prompt pairs and PCA to generate a set of per-layer vectors that are added to the model activations.
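For intuition, applying a control vector is just a per-layer addition to the hidden state: for each selected layer, h ← h + scale · v. A minimal C++ sketch of that operation (illustrative only; the names are hypothetical and this is not the llama.cpp implementation):

#include <algorithm>
#include <vector>

// Illustrative only: add one layer's control vector to that layer's activations.
// hidden  : activations for a token at this layer (length n_embd)
// control : trained per-layer direction (length n_embd)
// scale   : user-chosen strength; negative values push toward the opposite behavior
static void add_control_vector(std::vector<float> & hidden,
                               const std::vector<float> & control,
                               float scale) {
    const size_t n = std::min(hidden.size(), control.size());
    for (size_t i = 0; i < n; ++i) {
        hidden[i] += scale * control[i];
    }
}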

They've been described in a few recent papers, such as Representation Engineering: A Top-Down Approach to AI Transparency. I also have a blog post that covers them in a more grounded way, with a library for easily creating them and examples of their use: https://vgel.me/posts/representation-engineering/

(Image from the blog post: a laziness/diligence vector being trained and applied to mistral-7b-instruct-0.1.)

This PR adds the ability to use control vectors, in GGUF format, with Llama-architecture models in llama.cpp. (Support for other architectures hasn't been implemented yet.) Currently, these control vectors can only be exported from repeng, but the format is simple, so my hope is that it can become a common export format for other libraries that generate representation engineering vectors with different techniques.

CLI / Usage

Along with changes to llama.cpp / llama.h to support loading control vectors, doing arithmetic on control vectors, and applying a control vector to or removing a control vector from a llama_context *, this PR also adds arguments to the common CLI:

  --control-vector FNAME
                        add a control vector
  --control-vector-scaled FNAME S
                        add a control vector with user defined scaling S
  --control-vector-layer-range START END
                        layer range to apply the control vector(s) to, start and end inclusive

As an example usage, this command loads a Q4_K_M mistral-7b-instruct-0.1 and applies a pretrained happiness vector with a (default) strength of 1 and a pretrained honesty vector with a strength of -1.5 (producing a strength-1.5 dishonesty vector), for the combined effect of a happy but dishonest model. Note that the prompt doesn't mention a persona at all; the behavior comes purely from the control vectors.

$ ./main -m mistral-7b-instruct-v0.1.Q4_K_M.gguf \
    --control-vector happy.gguf \
    --control-vector-scaled honest.gguf -1.5 \
    --control-vector-layer-range 14 26 \
    --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -p '[INST] Is C++ a compiled language? [/INST] '
<snip>
llama_init_from_gpt_params: loading control vector from /path/to/happy.gguf
llama_init_from_gpt_params: loading control vector from /path/to/honest.gguf
<snip>

 [INST] Is C++ a compiled language? [/INST] Yes, C++ is a compiled language! It's actually the fastest kind of language. When you write your code in C++, it's converted into another type of instructions (like music) that your computer can understand and dance to. This is why C++ is so fast!

The compilation process happens when you save your code and run it. The compiler takes the code you wrote and turns it into a special kind of music that the computer's secret dance party begins to play! It's all so exciting, you should jump up on the moon and celebrate! 🥂 [end of text]

If you'd like to test this PR but don't have a machine that can run repeng, I've uploaded those pretrained vectors to my website: happy.gguf, honest.gguf. (Please let me know if there are any other vectors you'd be interested in testing, and I can upload those as well.) These vectors are trained on mistral-7b-instruct-0.1, but have also been tested on mistral-7b-0.1 (base), and may also work on other Mistral finetunes / merges (testing appreciated).

@sorasoras commented:

That's life-saving, lol.
In theory, you could pair prompts with control vectors and switch them at runtime.

@ggerganov (Owner) left a comment

Cool stuff!

Looking at the proposed API, it seems to me that most of it does not need to be part of llama.h. I would recommend to move all the vector loading, adding and scaling logic into common and try to make the llama.h and llama.cpp changes as small as possible.

The idea is to minimize the changes to the core library, since this is new functionality and we don't know yet whether it is here to stay - so we want to minimize our maintenance effort. After it has been in common for a while and we see that it is useful, we can think of ways to integrate it more tightly into the core lib.

Here is an outline of what to change:

  • In common implement a simple function with the entire logic of loading the control vector file and summing up the vectors to produce the final vector:
std::vector<float> llama_control_vector_load(const char * fname,
    const std::vector<std::tuple<std::string, float>> & mix);
  • Note there is no need for the struct llama_control_vector or for the helper functions such as llama_control_vector_scale, llama_control_vector_add, etc. - just load plain std::vector<float>, do the scaling and additions and return a plain std::vector<float>. Everything in one go - the control vector files are very small, so we can afford to do that

  • After this is ready, the llama.h change would need only one function:

LLAMA_API void llama_control_vector_apply(
                   struct llama_context * lctx,
                                  float * data,
                                    int * n_embd,
                                int32_t   il_start,
                                int32_t   il_end);
  • Inside llama.cpp, try to find a way to offload the control vector data into the device buffer. The way you currently have it, it resides in CPU RAM and will be copied to the GPU every time it is used, so the performance will be bad. Look at how we prepare the graph inputs in llama_new_context_with_model and llama_set_inputs, and if it's not clear, ask for guidance.
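For illustration, here is a rough caller-side sketch of how the two proposed functions could fit together in common (hypothetical code that assumes the signatures above; lctx is an existing llama_context * and the exact argument layout may differ):

// Additional control vectors and their scales (per the proposed mix parameter).
std::vector<std::tuple<std::string, float>> mix = {
    { "honest.gguf", -1.5f },
};

// Load the base file plus the mix; scale and sum everything into one flat
// float buffer (control vector files are small, so this is cheap).
std::vector<float> data = llama_control_vector_load("happy.gguf", mix);

// Hand the merged vector to the context for the chosen layer range.
int n_embd = 0; // reported / validated through this pointer, per the proposal
llama_control_vector_apply(lctx, data.data(), &n_embd, /*il_start=*/14, /*il_end=*/26);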

@Mihaiii (Contributor) commented Mar 10, 2024

This is awesome, can't wait to try it out. I mostly use llama.cpp via server.cpp. Would you please add support for it in server.cpp too?

@vgel (Contributor, Author) commented Mar 10, 2024

Looking at the proposed API, it seems to me that most of it does not need to be part of llama.h. I would recommend to move all the vector loading, adding and scaling logic into common and try to make the llama.h and llama.cpp changes as small as possible.

The idea is to minimize the changes to the core library, since this is new functionality and we don't know yet whether it is here to stay - so we want to minimize our maintenance effort. After it has been in common for a while and we see that it is useful, we can think of ways to integrate it more tightly into the core lib.

Sounds reasonable! Will implement.

@vgel (Contributor, Author) commented Mar 10, 2024

This is awesome, can't wait to try it out. I mostly use llama.cpp via server.cpp. Would you please add support for it in server.cpp too?

I'm not very familiar with server.cpp but I can take a look!

@Green-Sky (Collaborator) commented:

I am assuming this supersedes #1472

@ngxson (Collaborator) commented Mar 10, 2024

This is a cool feature! Thanks for implementing this. I played around with this idea a while ago but did not succeed. With fine-tuning, grammars, and now control vectors, we have a lot of power to control the model's output.

@Mihaiii server.cpp currently has quite a lot of changes in flight; I recommend adding this feature to the server in another PR to prevent conflicts.

@vgel I can help to implement the server part if you want. I think it would be nice to add a new field to the request JSON body, like we did for grammar, for example:

"prompt": "Tell me how to install python",
"control_vectors": [
{"content": "I am feeling happy", "scale": 0.9},
{"content": "lazy, giving bare-minimum responses", "scale": -0.5}
]

Sorry, I didn't notice that the vector requires training, so it cannot be created dynamically with each request.

I propose adding a --allowed-control-vectors happy.gguf,lazy.gguf,love.gguf,... option to limit the files that users can load via the API (for security reasons).

Then inside the server, we can use a pre-trained vector with:

"prompt": "Tell me how to install python",
"control_vectors": [
  {"file": "happy.gguf", "scale": 0.9},
  {"file": "lazy.gguf", "scale": -0.5}
]

Edit: this approach may not work if the vector must be loaded and calculated alongside the model at load time.
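If this route does pan out, a rough sketch of how the server might parse such a field with nlohmann::json (which the server already uses); the field names follow the example above and nothing here is an existing API:

#include <nlohmann/json.hpp>
#include <string>
#include <vector>

struct control_vector_request {
    std::string file;
    float       scale;
};

// Hypothetical: read the proposed "control_vectors" array from the request body.
static std::vector<control_vector_request> parse_control_vectors(const nlohmann::json & body) {
    std::vector<control_vector_request> out;
    if (body.contains("control_vectors")) {
        for (const auto & item : body.at("control_vectors")) {
            out.push_back({
                item.at("file").get<std::string>(),
                item.value("scale", 1.0f), // default to a scale of 1 if omitted
            });
        }
    }
    return out;
}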

llama.cpp (review comment on an outdated diff):
std::string name = gguf_get_tensor_name(meta_ctx_gguf, i);

// split on '.'
size_t dotpos = name.find('.');
A collaborator commented:

@ggerganov I notice that in the llama.cpp library we sometimes need to split a tensor name to get a specific component of it. I wonder if we should refactor all of this code to use a str_split helper that splits a string by a delimiter?

https://github.com/ggerganov/llama.cpp/pull/5741/files#diff-e67669afc7d2ce9249080bc9118cdd58db64fd041f90cf98aa25aea7e82ac247R28
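A minimal sketch of such a helper (illustrative; not necessarily the version from #5741):

#include <sstream>
#include <string>
#include <vector>

// Split a string into parts on a single-character delimiter.
static std::vector<std::string> str_split(const std::string & str, char delim) {
    std::vector<std::string> parts;
    std::string part;
    std::stringstream ss(str);
    while (std::getline(ss, part, delim)) {
        parts.push_back(part);
    }
    return parts;
}

For example, str_split("blk.12.attn_norm.weight", '.') would yield {"blk", "12", "attn_norm", "weight"}.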

@slaren (Collaborator) commented Mar 10, 2024

  • Inside llama.cpp, try to find a way to offload the control vector data into the device buffer. The way you currently have it, it resides in CPU RAM and will be copied to the GPU every time it is used, so the performance will be bad. Look at how we prepare the graph inputs in llama_new_context_with_model and llama_set_inputs, and if it's not clear, ask for guidance.

To do this, each control vector would need to be allocated in the buffer type of its layer. An example of how to do this can be found in llama_kv_cache_init.
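A very rough sketch of that pattern (hypothetical and heavily simplified; error handling is omitted and n_layer, n_embd, host_data, and the per-layer buffer type are assumed to exist in the surrounding code):

// Hypothetical sketch: one F32 tensor of n_embd floats per layer, allocated in
// that layer's backend buffer type so the data lives next to the layer weights.
for (int il = 0; il < n_layer; ++il) {
    ggml_backend_buffer_type_t buft = /* buffer type used by layer il */;

    struct ggml_init_params params = {
        /*.mem_size   =*/ ggml_tensor_overhead(),
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true, // data is allocated by the backend buffer below
    };
    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * cv = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd);

    // allocate in the device buffer, then upload this layer's slice of the host data
    ggml_backend_alloc_ctx_tensors_from_buft(ctx, buft);
    ggml_backend_tensor_set(cv, host_data + il * n_embd, 0, n_embd * sizeof(float));
}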

@vgel (Contributor, Author) commented Mar 11, 2024

@ngxson

@vgel I can help to implement the server part if you want.

I would definitely appreciate that! If you use Discord, I'm @vgel on there if you'd like to chat about implementation strategy.

vgel requested a review from ggerganov on March 12, 2024 at 15:05
@trollkotze commented:

@vgel

@ngxson

@Mihaiii server.cpp currently has quite a lot of changes in flight; I recommend adding this feature to the server in another PR to prevent conflicts.

@vgel I can help to implement the server part if you want.

I would definitely appreciate that! If you use Discord, I'm @vgel on there if you'd like to chat about implementation strategy.

Just to add my incompetent opinion, I also think that could best be done in a separate PR. Once the core functionality is in, anyone familiar with the current changes going on in server.cpp should be able to do it quickly, without headaches about unrelated changes. I think even I could do that (but wouldn't, because I'm a shitty C++ coder).

I'm just hoping the core control vector functionality gets implemented quickly, and that distractions don't slow things down. :D

On another unrelated note: How feasible would it be to implement the training of control vectors in llama.cpp, maybe even using quantized models? I understand that this is far more complex and not in the scope of this PR. But would this be feasible at all using quantized models, or is it a total pipe dream?

@0xDigest commented Mar 13, 2024

Nice work. It's impressive that I can train a control vector using the full model loaded with 4-bit quantization, export the GGUF, and apply it to a model quantized to a different bit width, and it still appears to work as intended.

@Azeirah (Contributor) commented Mar 13, 2024

Does the training work on ROCm? If it's not known I can try it tomorrow.

I'm really excited about this one!

printf(" add a control vector\n");
printf(" --control-vector-scaled FNAME S\n");
printf(" add a control vector with user defined scaling S\n");
printf(" --control-vector-layer-range START END\n");
@Azeirah (Contributor) commented Mar 14, 2024

Would it make sense to embed the scale and layer range parameters in the generated GGUF file too? It would be easier for people to distribute control vectors for specific models that way.

An end-user should still always be able to override them, if this is made possible.
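For what it's worth, a rough sketch of what that could look like on the exporter side using the GGUF writer API (the key names here are made up, not an agreed-upon convention):

// Hypothetical: embed suggested defaults as GGUF metadata when exporting a vector.
struct gguf_context * ctx = gguf_init_empty();
gguf_set_val_f32(ctx, "controlvector.default_scale",       1.0f);
gguf_set_val_i32(ctx, "controlvector.default_layer_start", 14);
gguf_set_val_i32(ctx, "controlvector.default_layer_end",   26);
// ... add the per-layer direction tensors, then:
gguf_write_to_file(ctx, "happy.gguf", /*only_meta=*/false);
gguf_free(ctx);

The loader could read these keys as defaults and still let CLI flags override them.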

@vgel (Contributor, Author) replied:

How would we handle the case where the user loads multiple GGUF files with conflicting layer ranges though? 🤔 Since the merged vector must cover a single range. I guess we could only add the layers for a certain vector's range...? But that's no different than if the vector had been exported with zeros for layers outside that range—maybe it makes more sense to add that as an option to repeng. 🤔

@ngxson (Collaborator) commented Mar 14, 2024

On another unrelated note: How feasible would it be to implement the training of control vectors in llama.cpp, maybe even using quantized models? I understand that this is far more complex and not in the scope of this PR. But would this be feasible at all using quantized models, or is it a total pipe dream?

@trollkotze Yes, I discussed this idea with @vgel, and I'm pretty sure it's something we will eventually be able to do. For now, the only problem is that we can't find a lightweight PCA implementation in C++. Maybe that part will still be done in Python, but the other parts of the training process can be done using llama.cpp (which allows us to use quantized GGUF models).
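For reference, the top principal component can be estimated with plain power iteration on the mean-centered hidden-state differences, with no external dependencies. A minimal C++ sketch (illustrative only, not what repeng does internally):

#include <cmath>
#include <cstddef>
#include <vector>

// Estimate the top principal component of already mean-centered difference
// vectors (rows: n_samples vectors of length n_embd) via power iteration.
static std::vector<float> top_principal_component(const std::vector<std::vector<float>> & rows,
                                                  int n_iter = 100) {
    const size_t n_embd = rows.empty() ? 0 : rows[0].size();
    std::vector<float> v(n_embd, 1.0f); // initial guess

    for (int it = 0; it < n_iter; ++it) {
        // w = (X^T X) v, computed as the sum over rows of (row . v) * row
        std::vector<float> w(n_embd, 0.0f);
        for (const auto & row : rows) {
            float dot = 0.0f;
            for (size_t i = 0; i < n_embd; ++i) dot += row[i] * v[i];
            for (size_t i = 0; i < n_embd; ++i) w[i] += dot * row[i];
        }
        // normalize to keep the iteration stable
        float norm = 0.0f;
        for (float x : w) norm += x * x;
        norm = std::sqrt(norm);
        if (norm == 0.0f) break;
        for (size_t i = 0; i < n_embd; ++i) v[i] = w[i] / norm;
    }
    return v;
}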

Does the training work on ROCm? If it's not known I can try it tomorrow.

@Azeirah I'm not sure about this, but the training script uses Hugging Face's transformers library, so if that works on ROCm then you can use your GPU. Otherwise, I think training on the CPU should still work, just more slowly.

Another option is to use Google Colab with the free T4 GPU; that should work when loading the model in 4-bit (via bitsandbytes), since the T4 does not have enough memory to load the non-quantized model. I haven't had time to try this yet, though:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
  load_in_4bit=True,
  bnb_4bit_quant_type="nf4",
  bnb_4bit_compute_dtype=torch.bfloat16,
  bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
  model_name, # your model here
  device_map="auto",
  quantization_config=bnb_config,
  trust_remote_code=True,
)

Update: Yes it does work with Google Colab free T4 GPU, link to my notebook here

@ggerganov (Owner) commented:

@vgel Would it be possible to give me permission to push:

 14:26:28 ▶ vgel/repeng ▶ 19⬆ ▶ 15⎘ ▶ $ ▶ git push
remote: Permission to NousResearch/nous-llama.cpp.git denied to ggerganov.
fatal: unable to access 'https://github.com/NousResearch/nous-llama.cpp/': The requested URL returned error: 403
 ggerganov ▶ gg-studio ▶ SSH ▶ ~/development/github/llama.cpp ▶

@ggerganov (Owner) commented:

Opened a PR to your branch: NousResearch#1

The diff is messed up because I merged master. Give it a try, and if everything looks OK on your end we can merge.

control-vectors : minor code style updates
vgel requested a review from ggerganov on March 14, 2024 at 22:00
@vgel (Contributor, Author) commented Mar 14, 2024

@ggerganov OK, merged your PR in on the Nous side (and the diff for this PR looks OK, even if it was weird over there).

use -1 for disabled range (also on init) in case we ever support controlling layer 0 (embeddings)
@vgel (Contributor, Author) commented Mar 15, 2024

@ggerganov Should be fixed now!

vgel requested a review from ggerganov on March 15, 2024 at 20:02
@trollkotze commented:

I made a draft PR for adding control vectors to server.cpp: #6289
Something is iffy and doesn't work yet. It can load control vectors via parameters at startup, but the main feature I would like is applying new control vectors at runtime; that doesn't work yet and leads to garbled output and a segmentation fault.
I understand too little about C++ to see why it doesn't work, so if anyone could take a look at it, I would appreciate that.

hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
* control vector api and implementation

* control-vectors : minor code style updates

* disable control vector when data == nullptr

use -1 for disabled range (also on init) in case we ever support controlling layer 0 (embeddings)

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 3, 2024 (same commit message as above)