Hello, I am trying to add prompt extension and weighting by slightly modifying the Stable Diffusion pipeline. I do this by replacing `pipeline._encode_prompt` with `lpw_pipe._encode_prompt`. This is the lpw script: https://gist.github.com/chavinlo/b7ebc7e7dea59e311dab564fd452ff3c#file-lpw-py-L393
```python
import oneflow as torch
import torch as og_torch
# Imports added for completeness (not in the original snippet):
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import OneFlowStableDiffusionPipeline
from .lpw import LongPromptWeightingPipeline

# Load the text model and tokenizer to be used by LPW
text_model = CLIPTextModel.from_pretrained(default_model, subfolder="text_encoder")
tokenizer_model = CLIPTokenizer.from_pretrained(default_model, subfolder="tokenizer")
text_model = text_model.to("cuda")
lpw_pipe = LongPromptWeightingPipeline(text_model, tokenizer_model, prompt_multiplier)

...

# Here I load multiple models from a configuration file.
pipe_map = dict()
for model in config['models']:
    print("Loading model:", model['model_path'])
    tmp_pipe = OneFlowStableDiffusionPipeline.from_pretrained(
        pretrained_model_name_or_path=model['model_path'],
        use_auth_token=True,
        torch_dtype=torch.float16
    )
    tmp_pipe.to("cuda")
    # Swap in the LPW prompt encoder
    tmp_pipe._encode_prompt = lpw_pipe._encode_prompt
    tmp_pipe.enable_graph_share_mem()
    tmp_prompt = "Anime girl, beautiful"
    tmp_neg_prompt = "Disgusting, Horrible"
    # Warm up the graph at each target resolution
    for resolution in resultant_resolutions:
        print("Doing resolution:", resolution)
        with torch.autocast("cuda"):
            tmp_pipe(
                prompt=tmp_prompt,
                negative_prompt=tmp_neg_prompt,
                height=resolution[1],
                width=resolution[0]
            )
    pipe_map[model['alias']] = tmp_pipe
```
In normal circumstances it exits with an AssertionError on `assert og_torch.cuda.is_initialized() is False` at https://github.com/Oneflow-Inc/diffusers/blob/oneflow-fork/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_oneflow.py#L709
If this assertion is removed, the script runs, but it uses roughly three times the VRAM per resolution round.
Here's the complete script: https://gist.github.com/chavinlo/d8005ebda6499853891c9edae8765b4b
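For context, the assertion fires because PyTorch creates its CUDA context lazily, on the first GPU operation. Loading the CLIP text encoder onto `"cuda"` with `og_torch` is enough to trip it. A minimal sketch of that behavior (standalone, independent of the pipeline):

```python
import torch as og_torch

# PyTorch creates its CUDA context lazily: nothing is initialized at import.
started_initialized = og_torch.cuda.is_initialized()
print("CUDA context initialized at import:", started_initialized)

if og_torch.cuda.is_available():
    # The first tensor moved to the GPU creates the context; this is what
    # trips the OneFlow pipeline's assertion when torch runs alongside it.
    og_torch.zeros(1).to("cuda")
    print("After first GPU op:", og_torch.cuda.is_initialized())
```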
Hello?
This is exactly what the assertion is trying to prevent: if you run OneFlow together with PyTorch on the GPU, extra VRAM is consumed by the two separate CUDA contexts.
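One way to avoid the second CUDA context, as a sketch under the assumption that the LPW text encoder is fast enough on the CPU: keep the `og_torch` CLIP model on `"cpu"`, convert its embeddings to NumPy, and let only OneFlow touch the GPU (e.g. `oneflow.tensor(np_embeds).to("cuda")` on the other side). The helper below is hypothetical, not part of the gist:

```python
import torch as og_torch

def to_numpy_embeds(torch_embeds):
    # Detach and pull to host memory; since the encoder stayed on the CPU,
    # this never initializes torch's CUDA context.
    return torch_embeds.detach().cpu().numpy()

# Stand-in for a CLIP text-encoder output (batch, seq_len, hidden);
# the real embeds would come from lpw_pipe's encoder run on "cpu".
fake_embeds = og_torch.randn(2, 77, 768)
np_embeds = to_numpy_embeds(fake_embeds)
print(np_embeds.shape, og_torch.cuda.is_initialized())
```

Whether the extra CPU round-trip is acceptable per generation is an open question, but it would keep `og_torch.cuda.is_initialized()` False and satisfy the assertion.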