
Conversation

@okotaku (Contributor) commented Nov 6, 2023

What does this PR do?

Issue: #5714

import torch
from diffusers import AutoencoderKL, DiffusionPipeline, UNet2DConditionModel
from peft import LoraConfig, get_peft_model

prompt = "a man"

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", torch_dtype=torch.float16
)
unet = get_peft_model(unet, LoraConfig(target_modules=["to_q", "to_v", "query", "value"]))
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, vae=vae, torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe(
    prompt,
    num_inference_steps=50,
    width=1024,
    height=1024,
).images[0]
image.save("demo.png")

This code raised the following error:

ValueError: PeftModel() is of type: <class 'peft.peft_model.PeftModel'>, but should be <class 'diffusers.models.modeling_utils.ModelMixin'>

I solved it.
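For context, the workaround before this fix was to hand the pipeline the underlying diffusers module instead of the PeftModel wrapper, i.e. pass `unet.base_model.model` to `DiffusionPipeline.from_pretrained`. A minimal sketch of that attribute traversal, using hypothetical stand-in classes (`DummyUNet`, `DummyLoraModel`, `DummyPeftModel`) since loading the real SDXL weights requires a GPU:

```python
# Sketch of unwrapping a PeftModel by hand before passing it to a pipeline.
# DummyUNet / DummyLoraModel / DummyPeftModel are hypothetical stand-ins that
# only mirror the attribute layout of the real classes.

class DummyUNet:
    """Stand-in for diffusers' UNet2DConditionModel."""

class DummyLoraModel:
    """Stand-in for peft's LoraModel; the wrapped module lives on .model."""
    def __init__(self, model):
        self.model = model

class DummyPeftModel:
    """Stand-in for peft.PeftModel; the inner LoraModel lives on .base_model."""
    def __init__(self, base_model):
        self.base_model = base_model

unet = DummyUNet()
peft_unet = DummyPeftModel(DummyLoraModel(unet))

# The traversal a user would do before DiffusionPipeline.from_pretrained(unet=...):
unwrapped = peft_unet.base_model.model
assert unwrapped is unet
```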

With this PR, we can change the following lines to pass the unet directly to the pipeline.

https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L992-L1000

pipeline = DiffusionPipeline.from_pretrained(
    args.pretrained_model_name_or_path,
    unet=unet,
    text_encoder=text_encoder,
    safety_checker=None,
    revision=args.revision,
)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev commented Nov 6, 2023

The documentation is not available anymore as the PR was closed or merged.

@DN6 (Collaborator) commented Nov 9, 2023

cc: @sayakpaul @younesbelkada

@yiyixuxu (Collaborator) commented Nov 9, 2023

@sayakpaul could you take a look here?

@younesbelkada (Contributor) left a comment:

In principle this looks good! I left one comment; IMO this PR should be agnostic to the PEFT integration and fix the issue for users who pass a PeftModel. I'm also not sure how much this is in the scope of diffusers, as we might want users to use the PEFT integration through the low-level API rather than the PeftModel interface.

Comment on lines 296 to 297:

    if USE_PEFT_BACKEND:
        from peft import PeftModel

Not sure if this condition is needed; I would rather do something like:

Suggested change:

    - if USE_PEFT_BACKEND:
    -     from peft import PeftModel
    + if is_peft_available() and isinstance(sub_model, PeftModel):
    +     from peft import PeftModel

As we don't use PeftModel for the PEFT integration, it should be totally agnostic to the minimum PEFT version we require for the integration.

@younesbelkada (Contributor) commented:

You could also do something different: expose a method unwrap_model that takes care of the whole logic (checking whether the model is a PEFT model, etc.), and use that method, which can also be copied over to different classes/modules.

@okotaku (Contributor, Author) commented Nov 9, 2023

In the example training scripts, the unet is passed to the pipeline when validation is run.

https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py#L1146

It would be nice to be able to pass the PeftModel directly to the pipeline at that time, eliminating the overhead for the user.

@sayakpaul (Member) commented:

We'll soon start working on refactoring our training scripts to support that :-)

@younesbelkada any other comments on this PR?

@sayakpaul (Member) commented:

> With this PR, we can change the following lines to pass the unet directly to the pipeline.

Hmm, I think we always want the users to load the unet first and then init a pipeline with it. I am not sure if we'd want to deviate from that design TBH. Hence, I'd like an opinion from @patrickvonplaten here.

@younesbelkada (Contributor) left a comment:

Makes sense, thanks! I will let the diffusers maintainers give their final approval.
As it would be quite a common scenario to unwrap the model, I would define a utility method unwrap_model:

def unwrap_model(model):
    if is_compiled_module(model):
        model = model._orig_mod

    if is_peft_available() and isinstance(model, PeftModel):
        model = model.base_model.model

    return model
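As a sanity check, the helper above can be exercised with lightweight stand-ins. Everything below is hypothetical scaffolding: `FakeCompiled` and `FakePeftModel` only mimic the attribute layout of a torch.compile wrapper and peft.PeftModel, and `is_compiled_module` / `is_peft_available` are replaced by trivial predicates for the sketch:

```python
class PlainModel:
    """Stand-in for an ordinary diffusers module."""

class FakeCompiled:
    """Mimics a torch.compile wrapper: the original module sits on ._orig_mod."""
    def __init__(self, mod):
        self._orig_mod = mod

class _Base:
    """Mimics peft's LoraModel: the wrapped module sits on .model."""
    def __init__(self, mod):
        self.model = mod

class FakePeftModel:
    """Mimics peft.PeftModel: the wrapped module sits on .base_model.model."""
    def __init__(self, mod):
        self.base_model = _Base(mod)

def is_compiled_module(m):
    return hasattr(m, "_orig_mod")

def is_peft_available():
    return True  # pretend peft is installed for the sketch

def unwrap_model(model):
    if is_compiled_module(model):
        model = model._orig_mod
    if is_peft_available() and isinstance(model, FakePeftModel):
        model = model.base_model.model
    return model

inner = PlainModel()
assert unwrap_model(FakeCompiled(inner)) is inner   # compiled wrapper removed
assert unwrap_model(FakePeftModel(inner)) is inner  # PEFT wrapper removed
assert unwrap_model(inner) is inner                 # plain module passes through
```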

@patrickvonplaten (Contributor) commented:

@okotaku could we maybe create an unwrap_model method as mentioned by @younesbelkada here: #5653 (review)? We already have 3 occurrences where we unwrap the model, and given that we now have another model class wrapper, it'd be good to add a utility function:

src/diffusers/pipelines/pipeline_utils.py
292:            model_cls = sub_model._orig_mod.__class__
550:                    not_compiled_module = module._orig_mod
655:                sub_model = sub_model._orig_mod

@okotaku (Contributor, Author) commented Nov 14, 2023

I have fixed it.

@patrickvonplaten (Contributor) commented:

Very nice!

@patrickvonplaten patrickvonplaten merged commit bfe94a3 into huggingface:main Nov 14, 2023
@patrickvonplaten (Contributor) commented:

Follow-up PR: #5789

yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
* Support maybe_raise_or_warn for peft

* fix by comment

* unwrap function
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024
* Support maybe_raise_or_warn for peft

* fix by comment

* unwrap function