RuntimeError in forge, magic prompt #814

Open
tazztone opened this issue Aug 20, 2024 · 2 comments

@tazztone

Not sure if this is supposed to work on Forge in the first place, but when I try Magic Prompt I get the error below (and I have more than enough spare VRAM):

WARNING:dynamicprompts.generators.magicprompt:First load of MagicPrompt may take a while.
C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
*** Error running process: C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\extensions\sd-dynamic-prompts\scripts\dynamic_prompting.py
    Traceback (most recent call last):
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\modules\scripts.py", line 844, in process
        script.process(p, *script_args)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\extensions\sd-dynamic-prompts\sd_dynamic_prompts\dynamic_prompting.py", line 480, in process
        all_prompts, all_negative_prompts = generate_prompts(
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\extensions\sd-dynamic-prompts\sd_dynamic_prompts\helpers.py", line 93, in generate_prompts
        all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [""]
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\dynamicprompts\generators\magicprompt.py", line 169, in generate
        magic_prompts = self._generate_magic_prompts(prompts)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\extensions\sd-dynamic-prompts\sd_dynamic_prompts\magic_prompt.py", line 32, in _generate_magic_prompts
        for prompt in super()._generate_magic_prompts(list(orig_prompts))
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\dynamicprompts\generators\magicprompt.py", line 215, in _generate_magic_prompts
        prompts = self._generator(
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\pipelines\text_generation.py", line 262, in __call__
        return super().__call__(text_inputs, **kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\pipelines\base.py", line 1238, in __call__
        outputs = list(final_iterator)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\pipelines\pt_utils.py", line 124, in __next__
        item = next(self.iterator)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\pipelines\pt_utils.py", line 125, in __next__
        processed = self.infer(item, **self.params)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\pipelines\base.py", line 1164, in forward
        model_outputs = self._forward(model_inputs, **forward_params)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\pipelines\text_generation.py", line 351, in _forward
        generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\generation\utils.py", line 2024, in generate
        result = self._sample(
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\generation\utils.py", line 2982, in _sample
        outputs = self(**model_inputs, return_dict=True)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 1315, in forward
        transformer_outputs = self.transformer(
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 1027, in forward
        inputs_embeds = self.wte(input_ids)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\sparse.py", line 163, in forward
        return F.embedding(
      File "C:\_stability_matrix\Data\Packages\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\functional.py", line 2264, in embedding
        return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
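
For context, this error means the model's weights and the input token ids ended up on different devices. A minimal sketch of the same failure outside the extension (the Embedding layer here just stands in for GPT-2's wte embedding from the traceback; the sizes are GPT-2's vocab and hidden dims):

import torch

emb = torch.nn.Embedding(50257, 768).to("cuda")  # weights on the GPU, like GPT-2's wte
ids = torch.tensor([[15496, 995]])               # token ids still on the CPU
emb(ids)             # raises: Expected all tensors to be on the same device ...
emb(ids.to("cuda"))  # fix: move the inputs to the weights' device first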

---
@tazztone (Author)

Claude 3.5 told me how to fix it:

  1. I asked "how do I fix this error?"
  2. It told me to modify the file magic_prompt.py.
  3. I pasted the file's contents and asked how to modify it to fix the error.

Amazingly, it gave me this working replacement code:

import re
from itertools import zip_longest
import torch
from dynamicprompts.generators.magicprompt import MagicPromptGenerator
from sd_dynamic_prompts.special_syntax import (
    append_chunks,
    remove_a1111_special_syntax_chunks,
)

def massage_prompt(prompt: str) -> str:
    # Coalesce repeated punctuation to a single instance
    prompt = re.sub(r"([.,])\1+", r"\1", prompt)
    # Remove leading/trailing whitespace
    prompt = prompt.strip()
    return prompt

class SpecialSyntaxAwareMagicPromptGenerator(MagicPromptGenerator):
    """
    Magic Prompt generator that is aware of A1111 special syntax (LoRA, hypernet, etc.).
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Pin the model to a single device so weights and inputs always match
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.model.to(self.device)

    def _generate_magic_prompts(self, orig_prompts: list[str]) -> list[str]:
        orig_prompts, chunks = zip(
            *(remove_a1111_special_syntax_chunks(p) for p in orig_prompts),
        )
        
        # Move input to the correct device
        input_ids = self.tokenizer(list(orig_prompts), return_tensors="pt", padding=True).input_ids
        input_ids = input_ids.to(self.device)
        
        # Generate the magic prompts
        with torch.no_grad():
            output = self.model.generate(
                input_ids,
                max_length=100,
                num_return_sequences=1,
                no_repeat_ngram_size=2
            )
        
        # Decode the output
        magic_prompts = self.tokenizer.batch_decode(output, skip_special_tokens=True)
        
        # Massage the prompts
        magic_prompts = [massage_prompt(prompt) for prompt in magic_prompts]
        
        # Append chunks and return
        return [
            append_chunks(prompt, chunk)
            for prompt, chunk in zip_longest(magic_prompts, chunks, fillvalue=None)
        ]

So if anyone hits the same error and wants to fix it: replace the contents of \stable-diffusion-webui-forge\extensions\sd-dynamic-prompts\sd_dynamic_prompts\magic_prompt.py with the code above, save the file, and restart Forge.
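
As far as I can tell, the key difference from the stock file is that this version skips the transformers pipeline call shown in the traceback: it pins the model to one device in __init__ and explicitly moves the tokenized input ids to that same device before calling model.generate. If you want to double-check that the patch took effect, something like this should work (assuming you run it with the extension's folder on your Python path; note that constructing the generator downloads/loads the MagicPrompt model, so the first run takes a while):

from sd_dynamic_prompts.magic_prompt import SpecialSyntaxAwareMagicPromptGenerator

gen = SpecialSyntaxAwareMagicPromptGenerator()
print(gen.device)                           # "cuda" when a GPU is available, else "cpu"
print(next(gen.model.parameters()).device)  # "cuda:0" in the GPU case, i.e. the same device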

---
@chaewai commented Sep 1, 2024

Thanks for sharing the fix, had the same problem!
