huggingface/peft#39
This PR adds some enhancements by making it possible to share trained LoRA weights and configs. With this PR, the peft API looks as follows:
from transformers import AutoModelForCausalLM
from peft import LoraConfig, LoraModel
model_id = "facebook/opt-350m"
lora_model_id = "./temp-lora"
# Create a config
config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
lora_dropout=0.05,
bias="none",
)
config.save_pretrained(lora_model_id)
# Load the config
config = LoraConfig.from_pretrained(lora_model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Load and save the model
model = LoraModel(config, model)
# Save the adapters only --> here, only 6 MB
model.save_pretrained(lora_model_id)
# Load from the saved model
model = AutoModelForCausalLM.from_pretrained(model_id)
model = LoraModel.from_pretrained(model, lora_model_id)
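As a quick sanity check, you can list what save_pretrained wrote into the adapter directory. A minimal sketch, assuming the checkpoint consists of an adapter config plus an adapter weights file (the exact file names are an assumption, not confirmed by this PR):

import os

# Inspect the adapter checkpoint directory; only the adapter config and
# weights should be there, a few MB instead of the full base model.
for fname in os.listdir(lora_model_id):
    size_mb = os.path.getsize(os.path.join(lora_model_id, fname)) / 1e6
    print(f"{fname}: {size_mb:.2f} MB")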
This PR adds:
from_pretrained support for xxConfig and LoraModel
save_pretrained support for xxConfig and LoraModel
Here is an example of adapter weights that I pushed on the Hub: https://huggingface.co/ybelkada/test-opt-lora/tree/main (base model: facebook/opt-350m), which you can load with this PR as follows:
from transformers import AutoModelForCausalLM
from peft import LoraConfig, LoraModel
from huggingface_hub.repocard import RepoCard
lora_model_id = "ybelkada/test-opt-lora"
card = RepoCard.load(lora_model_id)
model_id = card.data.to_dict()["base_model"]
# Load the config & model
config = LoraConfig.from_pretrained(lora_model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Load the Lora model
model = LoraModel.from_pretrained(model, lora_model_id)
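Once the adapters are loaded, generation should work as with any transformers causal LM. A minimal usage sketch, assuming LoraModel forwards generate() to the wrapped base model (an assumption about this PR's wrapper, not verified):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Hello, my name is", return_tensors="pt")
# Assumes the LoRA wrapper delegates generate() to the underlying base model.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))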
cc @sayakpaul @pacman100. This PR is now ready for review.