Conversation

@lstein lstein commented Oct 27, 2022

This PR corrects the call signature for do_diffusion_step() and enhances the description of using prompt2prompt in the documentation.

@lstein lstein requested a review from Kyle0654 October 27, 2022 12:32
lstein commented Oct 27, 2022

@damian0815:

There's a problem when passing an empty prompt ("") to the prompt parser. I get this stack trace:

│ /u/lstein/projects/SD/InvokeAI/./scripts/invoke.py:880 in <module>                               │
│                                                                                                  │
│   877 ######################################                                                     │
│   878                                                                                            │
│   879 if __name__ == '__main__':                                                                 │
│ ❱ 880 │   main()                                                                                 │
│   881                                                                                            │
│                                                                                                  │
│ /u/lstein/projects/SD/InvokeAI/./scripts/invoke.py:106 in main                                   │
│                                                                                                  │
│   103 │   │   )                                                                                  │
│   104 │                                                                                          │
│   105 │   try:                                                                                   │
│ ❱ 106 │   │   main_loop(gen, opt)                                                                │
│   107 │   except KeyboardInterrupt:                                                              │
│   108 │   │   print("\ngoodbye!")                                                                │
│   109                                                                                            │
│                                                                                                  │
│ /u/lstein/projects/SD/InvokeAI/./scripts/invoke.py:332 in main_loop                              │
│                                                                                                  │
│   329 │   │   │   │   catch_ctrl_c = infile is None # if running interactively, we catch keybo   │
│   330 │   │   │   │   opt.last_operation='generate'                                              │
│   331 │   │   │   │   try:                                                                       │
│ ❱ 332 │   │   │   │   │   gen.prompt2image(                                                      │
│   333 │   │   │   │   │   │   image_callback=image_writer,                                       │
│   334 │   │   │   │   │   │   step_callback=step_callback,                                       │
│   335 │   │   │   │   │   │   catch_interrupts=catch_ctrl_c,                                     │
│                                                                                                  │
│ /u/lstein/projects/SD/InvokeAI/ldm/generate.py:417 in prompt2image                               │
│                                                                                                  │
│    414 │   │   mask_image = None                                                                 │
│    415 │   │                                                                                     │
│    416 │   │   try:                                                                              │
│ ❱  417 │   │   │   uc, c, extra_conditioning_info = get_uc_and_c_and_ec(                         │
│    418 │   │   │   │   prompt, model =self.model,                                                │
│    419 │   │   │   │   skip_normalize=skip_normalize,                                            │
│    420 │   │   │   │   log_tokens    =self.log_tokenization                                      │
│                                                                                                  │
│ /u/lstein/projects/SD/InvokeAI/ldm/invoke/conditioning.py:65 in get_uc_and_c_and_ec              │
│                                                                                                  │
│    62 │   │   │   this_embedding, _ = build_embeddings_and_tokens_for_flattened_prompt(model,    │
│    63 │   │   │   embeddings_to_blend = this_embedding if embeddings_to_blend is None else tor   │
│    64 │   │   │   │   (embeddings_to_blend, this_embedding))                                     │
│ ❱  65 │   │   conditioning = WeightedFrozenCLIPEmbedder.apply_embedding_weights(embeddings_to_   │
│    66 │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   blend.we   │
│    67 │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   │   normaliz   │
│    68 │   else:                                                                                  │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'NoneType' object has no attribute 'unsqueeze'
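A minimal sketch (with hypothetical stand-in names, not the real InvokeAI code) of the failure pattern behind this AttributeError: with an empty prompt, the loop that accumulates fragment embeddings never runs, the accumulator stays `None`, and the later weighting call invokes a method on `None`. A guarded version falls back to a default embedding instead.

```python
def blend_embeddings(fragments):
    """Accumulate per-fragment embeddings; stays None for an empty prompt."""
    blended = None
    for fragment in fragments:
        embedding = [len(fragment)]  # stand-in for a real CLIP embedding tensor
        blended = embedding if blended is None else blended + embedding
    return blended

def apply_weights(blended):
    # Mirrors the crash: calling a method on None raises AttributeError,
    # just like blended.unsqueeze(...) in the trace above.
    return blended.copy()

def apply_weights_safe(blended):
    # Guarded variant: substitute a default (e.g. the embedding of "")
    # before any tensor method is called on the accumulator.
    if blended is None:
        blended = [0]  # stand-in for embedding("")
    return blended.copy()
```

The key point is that the guard has to run before any tensor method is invoked on the accumulator, since the empty-prompt case produces no fragments at all.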


damian0815 commented Oct 27, 2022

The empty prompt crash is fixed in dc86fc92ce99af7c0f850a14bc4de845dd917c05. I have a unit test for it, but including legacy blend support bypasses that code, hah.
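A regression test in the spirit of the one mentioned here might look like the following. This is a hedged sketch against a stub parser (`parse_prompt` is hypothetical, not the real InvokeAI entry point): an empty prompt should yield usable conditioning input rather than raising.

```python
def parse_prompt(text: str):
    # Stub for the prompt parser: split into fragments, and fall back
    # to a single empty fragment (i.e. embed "") when the prompt is empty,
    # so downstream code never sees a None/empty accumulator.
    fragments = [f for f in text.split() if f]
    if not fragments:
        fragments = [""]
    return fragments

def test_empty_prompt_does_not_crash():
    result = parse_prompt("")
    assert result is not None
    assert result == [""]

test_empty_prompt_does_not_crash()
```

Note that a test like this only covers the direct path; as the comment above points out, the legacy blend path can bypass the fixed code, so that path would need its own test case.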

@lstein lstein merged commit 16e7cbd into development Oct 27, 2022
@lstein lstein deleted the prompt-tweaks branch October 27, 2022 21:12