Conversation

@Kyle0654 (Contributor)

Subgraphs are now functional. They are stored in a graph library and are automatically added to the CLI by name. They can expose inputs and outputs when used from the CLI. (Some more work will be needed to support this in the UI, namely storing a reference to the graph library graph somewhere on the GraphNode, or with the Graph.)
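
For readers following along, here is a minimal sketch of what a graph-library entry with exposed inputs and outputs could look like. All names here (`LibraryGraph`, `ExposedNodeField`, the field names) are illustrative assumptions, not necessarily the classes this PR introduces:

```python
# Hypothetical sketch only; class and field names are assumptions,
# not necessarily what this PR actually adds.
from pydantic import BaseModel, Field


class ExposedNodeField(BaseModel):
    """Points at one field of one node inside the stored graph."""
    node_id: str  # id of the node inside the library graph
    field: str    # name of that node's input or output field
    alias: str    # the name exposed to the CLI, e.g. "prompt"


class LibraryGraph(BaseModel):
    """A named, reusable graph the CLI can register as a command by name."""
    name: str
    description: str = ""
    exposed_inputs: list[ExposedNodeField] = Field(default_factory=list)
    exposed_outputs: list[ExposedNodeField] = Field(default_factory=list)
```

With a shape like this, the CLI can enumerate the library, register each `name` as a command, and map flags onto the exposed node fields via their aliases.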

@lstein self-requested a review, April 12, 2023 21:13
@psychedelicious (Collaborator)

Am I doing it wrong?

```
invoke> noise | t2l --prompt dog | l2l --prompt cat
source=EdgeConnection(node_id='0', field='noise') destination=EdgeConnection(node_id='1', field='noise')
source=EdgeConnection(node_id='1', field='latents') destination=EdgeConnection(node_id='2', field='latents')
* Warning: '' is not a valid model name. Using default model instead.
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
   | Using faster float16 precision
   | Loading diffusers VAE from stabilityai/sd-vae-ft-mse
Fetching 15 files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 19187.12it/s]
   | Default image dimensions = 512 x 512
>> Loading embeddings from /home/bat/invokeai/embeddings
   | Loading v4 embedding file: terence/learned_embeds.bin
   ** Notice: terence/learned_embeds.bin was trained on a model with an incompatible token dimension: 768 vs 1024.
>> Textual inversion triggers:
>> Model loaded in 6.25s
>> Max VRAM used to load the model: 2.16G
>> Current VRAM usage:2.16G

>> [TOKENLOG] Parsed Prompt: FlattenedPrompt:[Fragment:'dog'@1.0]

>> [TOKENLOG] Parsed Negative Prompt: FlattenedPrompt:[Fragment:''@1.0]

>> [TOKENLOG] Tokens  (1):
dog
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 11.19it/s]
Error in node 18a25a45-ff5f-4ac6-b855-8ead0999a876 (source node 2): Traceback (most recent call last):
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/processor.py", line 54, in __process
    outputs = invocation.invoke(
  File "/home/bat/Documents/Code/InvokeAI/invokeai/app/invocations/latent.py", line 331, in invoke
    noise = context.services.latents.get(self.noise.latents_name)
AttributeError: 'NoneType' object has no attribute 'latents_name'

Session error: creating a new session
```
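
The traceback points at an unguarded dereference: `invoke()` reads `self.noise.latents_name`, but no noise edge was connected to `l2l` in this pipe, so `self.noise` is `None`. As it turns out below, this run was usage-related rather than a bug, but a guard would surface an actionable error instead of an `AttributeError`. A minimal sketch; only `self.noise` and `latents_name` come from the traceback, and the class name and context plumbing are assumed:

```python
from typing import Any, Optional


class LatentsToLatentsInvocation:
    """Stub standing in for the real invocation; only the guard matters here."""
    noise: Optional[Any] = None  # set when a noise edge is connected

    def invoke(self, context: Any):
        if self.noise is None:
            # Fail with a clear message instead of the AttributeError above.
            raise ValueError(
                "l2l requires a connected noise input; in the CLI, link one "
                "with e.g. `--link -2 noise noise` (see the usage below)."
            )
        # The line from the traceback, now guarded:
        noise = context.services.latents.get(self.noise.latents_name)
        ...
```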

@Kyle0654 (Contributor, Author)

Hrm... I guess LatentsToLatents needs more testing =/

@lstein (Collaborator)

I'm approving this in the interest of moving things along. @Kyle0654, will you fix the LatentsToLatents invocation before merging?

@Kyle0654 (Contributor, Author)

LatentsToLatents is fixed (a slightly different problem from the one @psychedelicious noticed, which was usage-related and not a bug). Usage looks like this (using t2l to produce a base image):

```
noise --seed 10 | t2l --prompt "an old man" | l2l --prompt "an old dog" --strength 0.8 --link -2 noise noise | l2i | show_image
```
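
Judging by the `EdgeConnection` lines in the earlier log, `--link -2 noise noise` wires the `noise` output of the node two positions back in the pipe (the `noise` invocation) into the `noise` input of `l2l`. Roughly, in terms of those printed objects (the dataclass below is a self-contained stand-in, and the node ids are assumptions matching the earlier log):

```python
from dataclasses import dataclass


# Self-contained stand-in for the EdgeConnection objects printed in the log.
@dataclass
class EdgeConnection:
    node_id: str
    field: str


# What `--link -2 noise noise` appears to create: an edge from the `noise`
# node's `noise` output into l2l's `noise` input (node ids assumed).
source = EdgeConnection(node_id="0", field="noise")
destination = EdgeConnection(node_id="2", field="noise")
```

This also explains the earlier failure: without the `--link`, `l2l` only receives `latents` from `t2l`, and its `noise` input stays unset.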

@Kyle0654 enabled auto-merge (squash), April 14, 2023 05:34
@Kyle0654 requested a review from mauwii as a code owner, April 14, 2023 06:17
@Kyle0654 merged commit 23d65e7 into main, April 14, 2023
@Kyle0654 deleted the kyle0654/node_graph_library branch, April 14, 2023 06:41
@psychedelicious (Collaborator)

Confirmed working
