```python
else:
    in_features, out_features = target.in_features, target.out_features
    if kwargs["fan_in_fan_out"]:
        warnings.warn(
            "fan_in_fan_out is set to True but the target module is not a Conv1D. "
            "Setting fan_in_fan_out to False."
        )
        kwargs["fan_in_fan_out"] = False
```
The cause is the code below:
```python
def _prepare_lora_config(peft_config, model_config):
    if peft_config.target_modules is None:
        if model_config["model_type"] not in TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING:
            raise ValueError("Please specify `target_modules` in `peft_config`")
        peft_config.target_modules = TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING[model_config["model_type"]]
    if len(peft_config.target_modules) == 1:
        peft_config.fan_in_fan_out = True
        peft_config.enable_lora = [True, False, True]
    if peft_config.inference_mode:
        peft_config.merge_weights = True
    return peft_config
```
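The single-entry branch above can be sketched in isolation (using a hypothetical `Cfg` class as a stand-in for `LoraConfig`; the real class has more fields):

```python
# Minimal sketch of the single-entry branch in _prepare_lora_config.
class Cfg:
    target_modules = ["q_k_v"]  # one fused q/k/v projection module
    fan_in_fan_out = False
    enable_lora = None

cfg = Cfg()
if len(cfg.target_modules) == 1:
    # Set unconditionally, i.e. without checking whether the matched
    # module is actually a Conv1D (GPT-2-style) layer.
    cfg.fan_in_fan_out = True
    cfg.enable_lora = [True, False, True]

print(cfg.fan_in_fan_out)  # True, even if the target is a plain nn.Linear
```

If the matched module then turns out to be `nn.Linear` rather than `Conv1D`, the tuner warns and flips `fan_in_fan_out` back to `False`, which is exactly the warning reported here.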
If `target_modules` has length one and the target module isn't a Conv1D, the warning `UserWarning: fan_in_fan_out is set to True but the target module is not a Conv1D. Setting fan_in_fan_out to False.` is raised at peft/tuners/lora.py:173. Some Transformer-based models use a single fused q_k_v projection (one linear/dense layer) instead of three separate q, k, v layers, so `target_modules` contains only one entry. And because a string config is matched as a regex with `fullmatch` rather than as a suffix, I must set `target_modules=["q_k_v"]`, not `target_modules="q_k_v"`:
```python
if isinstance(self.peft_config.target_modules, str):
    target_module_found = re.fullmatch(self.peft_config.target_modules, key)
else:
    target_module_found = any(key.endswith(target_key) for target_key in self.peft_config.target_modules)
```
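The difference between the two branches can be shown with a short sketch (the module key `encoder.layer.0.attention.q_k_v` is a made-up example):

```python
import re

# A fully qualified module name as produced by model.named_modules().
key = "encoder.layer.0.attention.q_k_v"

# String config: the whole key must match the regex, so a bare module
# name does not match a nested key.
str_match = re.fullmatch("q_k_v", key)
print(bool(str_match))  # False

# List config: a suffix check is used instead, so the bare name matches.
list_match = any(key.endswith(t) for t in ["q_k_v"])
print(list_match)  # True
```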
Therefore, the `fan_in_fan_out` parameter set in `LoraConfig` always ends up disabled on this code path; maybe a fix is needed.
A harmless warning here.