Issues when switching between multiple LoRA adapters #1802
This should not happen. Could you please share some code to reproduce this error? If you're on the latest PEFT version, you can also run …
Hey, could you please paste the code as text? Otherwise I'd have to copy everything by hand if I want to reproduce. :) Also, if you call …
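Judging from the reply below, the point being made here is that add_adapter (or get_peft_model with a config) attaches a fresh, untrained LoRA adapter, whereas load_adapter or PeftModel.from_pretrained loads trained weights from disk. A minimal sketch of that distinction, assuming the facebook/opt-125m base model and a hypothetical trained-adapter directory:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# get_peft_model / add_adapter with a LoraConfig creates a *new* adapter whose weights
# are freshly initialized; with the default init_lora_weights=True the adapter starts
# as an exact no-op (lora_B is zero), so outputs match the base model until training.
model = get_peft_model(base, LoraConfig(r=8), adapter_name="fresh_adapter")

# Loading *trained* weights is a separate step; "path/to/trained_adapter" is a
# hypothetical directory previously produced by save_pretrained.
# model.load_adapter("path/to/trained_adapter", adapter_name="trained_adapter")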
No, I was not aware of that ("adding a fresh, untrained LoRA adapter"), so I changed the code based on this information, but I got the same behavior, as you can see. Here is the code as text: …
Okay, thanks for trying. Since you're using private adapters, I can't really reproduce this unless you can share them. One thing to try would be to use PEFT, not transformers, to load the adapters:

from peft import PeftModel

# instead of base_model.load_adapter
model = PeftModel.from_pretrained(base_model, peft_adapter1_output_dir, adapter_name="adapter_1")
model.load_adapter(peft_adapter2_output_dir, adapter_name="adapter_2")

Regarding the error, please run:

from peft import get_model_status, get_layer_status

# after base_model.load_adapter
get_layer_status(base_model)
get_model_status(base_model)

and paste the results here.
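As a self-contained variant of that check (no private adapters needed), one can build a throwaway model with two adapters and confirm that the status helpers report the expected active adapter; this sketch assumes a recent PEFT release that ships get_layer_status / get_model_status:

from peft import LoraConfig, get_peft_model, get_layer_status, get_model_status
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = LoraConfig(r=8, init_lora_weights=False)
model = get_peft_model(model, config, adapter_name="adapter_1")
model.add_adapter("adapter_2", config)
model.set_adapter("adapter_2")

# Both reports should show "adapter_2" as the active adapter on every LoRA layer;
# if they don't, set_adapter is not taking effect.
print(get_model_status(model))
for layer_status in get_layer_status(model):
    print(layer_status)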
Thank you for your prompt responses. I tried what you mentioned for this part: … and it gave me this output: … Regardless of my private adapters, does this approach work for you on any adapter you can access?
Yes, it's working; here is a simple test:

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.manual_seed(0)
model_id = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_id)
input = torch.tensor([[1, 2, 3, 4, 5]])
output_base = model(input).logits
print("Base model output:")
print(output_base[0, :3, :5])
# create a PEFT model with 2 adapters and save it
config = LoraConfig(r=8, init_lora_weights=False)
model = get_peft_model(model, config, adapter_name="adapter1")
model.add_adapter("adapter2", config)
model.save_pretrained("/tmp/issue-1802")
# load the model again
del model
model = AutoModelForCausalLM.from_pretrained("/tmp/issue-1802/adapter1", adapter_name="adapter1")
model.load_adapter("/tmp/issue-1802/adapter2", "adapter2")
model.set_adapter("adapter1")
output_adapter1 = model(input).logits
print("Model output after loading adapter1:")
print(output_adapter1[0, :3, :5])
model.set_adapter("adapter2")
output_adapter2 = model(input).logits
print("Model output after setting adapter2:")
print(output_adapter2[0, :3, :5])

This prints: …

Note that when you want to compare model outputs, looking at the generated tokens is not reliable. When the difference in logits is small, the generated tokens can be the same even if the outputs are different. Therefore, it's better to check the logits directly, as in my example.
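To make that comparison concrete, here is one way (a sketch that reuses the variables from the test above) to check the logits numerically instead of eyeballing generated text:

# The adapters were created with init_lora_weights=False, so their outputs should
# differ both from the base model and from each other.
print(torch.allclose(output_base, output_adapter1))      # expected: False
print(torch.allclose(output_adapter1, output_adapter2))  # expected: False
print((output_adapter1 - output_adapter2).abs().max())   # expected: clearly greater than 0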
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread.
I am experiencing the same issues when switching between multiple adapters, despite following the documentation and checking #1315 (@BenjaminBossan).
When I switch between the adapters using the set_adapter method, there is no observable change in the model's behavior. The outputs remain the same, regardless of which adapter is active.
I suspect that the set_adapter method does not actually activate the specified adapter correctly. Instead, I notice a change in behavior only when I merge the adapter into the base model.
The documentation does not mention the need to perform a merge when switching adapters. Additionally, the methods add_adapter, set_adapter, and enable_adapters do not appear to have any effect.
Please clarify how to correctly switch between adapters.
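For context, a minimal sketch of the two patterns contrasted above, using hypothetical adapter directories adapter_a and adapter_b (previously saved with save_pretrained); switching should only require set_adapter, while merging is an optional, separate step:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# "adapter_a" and "adapter_b" are hypothetical directories produced by save_pretrained.
model = PeftModel.from_pretrained(base, "adapter_a", adapter_name="adapter_a")
model.load_adapter("adapter_b", adapter_name="adapter_b")

tokens = torch.tensor([[1, 2, 3, 4, 5]])

# Switching via set_adapter: no merge required; the logits should change whenever
# the two adapters contain genuinely different trained weights.
model.set_adapter("adapter_a")
logits_a = model(tokens).logits
model.set_adapter("adapter_b")
logits_b = model(tokens).logits
print(torch.allclose(logits_a, logits_b))  # expected: False for different adapters

# Merging bakes an adapter into the base weights (e.g. for deployment);
# it is not required just to switch between adapters.
# merged_model = model.merge_and_unload()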