
BUG with param fan_in_fan_out in LoraConfig #324

Closed

EeyoreLee opened this issue Apr 18, 2023 · 2 comments

Comments

@EeyoreLee

A harmless warning is emitted here:

                    else:
                        in_features, out_features = target.in_features, target.out_features
                        if kwargs["fan_in_fan_out"]:
                            warnings.warn(
                                "fan_in_fan_out is set to True but the target module is not a Conv1D. "
                                "Setting fan_in_fan_out to False."
                            )
                            kwargs["fan_in_fan_out"] = False

It is caused by the code below:

def _prepare_lora_config(peft_config, model_config):
    if peft_config.target_modules is None:
        if model_config["model_type"] not in TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING:
            raise ValueError("Please specify `target_modules` in `peft_config`")
        peft_config.target_modules = TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING[model_config["model_type"]]
    if len(peft_config.target_modules) == 1:
        peft_config.fan_in_fan_out = True
        peft_config.enable_lora = [True, False, True]
    if peft_config.inference_mode:
        peft_config.merge_weights = True
    return peft_config

If target_modules has only one entry and that module isn't a Conv1D, the warning "UserWarning: fan_in_fan_out is set to True but the target module is not a Conv1D. Setting fan_in_fan_out to False." is raised from peft/tuners/lora.py:173. For example, some Transformer-based models use a single fused q_k_v projection (one linear/dense layer) instead of three separate q, k, v layers, so target_modules ends up with only one entry. And because a string value is matched with re.fullmatch against the full module name rather than with a suffix match, I have to set target_modules=["q_k_v"] instead of target_modules="q_k_v".

            if isinstance(self.peft_config.target_modules, str):
                target_module_found = re.fullmatch(self.peft_config.target_modules, key)
            else:
                target_module_found = any(key.endswith(target_key) for target_key in self.peft_config.target_modules)
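To illustrate the difference between the two forms, here is a small sketch (the dotted module path is a made-up example, not taken from a specific model):

import re

# Hypothetical fused attention projection inside a Transformer block.
key = "transformer.h.0.attn.q_k_v"

# target_modules="q_k_v" (a string) goes through re.fullmatch, which must
# match the whole dotted name, so the module is NOT found.
print(re.fullmatch("q_k_v", key))   # None

# target_modules=["q_k_v"] (a list) goes through str.endswith, which only
# checks the suffix, so the module IS found.
print(key.endswith("q_k_v"))        # True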

Therefore, the fan_in_fan_out param in LoraConfig gets forced to True in this case and then has to be disabled again with a warning; maybe a change is needed here.
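A minimal repro sketch of the situation described above, assuming any model whose attention uses a single fused nn.Linear projection (bloom-560m's query_key_value is used here only for illustration; the issue does not name a specific model):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# bloom's attention is a single fused query_key_value nn.Linear, so the
# target_modules list has exactly one entry.
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # single entry -> fan_in_fan_out forced to True
    task_type="CAUSAL_LM",
)

# On the affected version this emits:
# UserWarning: fan_in_fan_out is set to True but the target module is not a Conv1D.
# Setting fan_in_fan_out to False.
model = get_peft_model(base_model, config)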

@pacman100
Collaborator

Hello @EeyoreLee, could you try the main branch and let us know if that solves the issue?

@EeyoreLee
Author

> Hello @EeyoreLee, could you try the main branch and let us know if that solves the issue?

Excellent work! Yep, it's fixed. Thanks!
