[nodes] Add subgraph library, subgraph usage in CLI, and fix subgraph execution #3180
Conversation
Fix a CLI bug with multiple links per node
Am I doing it wrong?

invoke> noise | t2l --prompt dog | l2l --prompt cat
source=EdgeConnection(node_id='0', field='noise') destination=EdgeConnection(node_id='1', field='noise')
source=EdgeConnection(node_id='1', field='latents') destination=EdgeConnection(node_id='2', field='latents')
* Warning: '' is not a valid model name. Using default model instead.
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using faster float16 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
Fetching 15 files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 19187.12it/s]
| Default image dimensions = 512 x 512
>> Loading embeddings from /home/bat/invokeai/embeddings
| Loading v4 embedding file: terence/learned_embeds.bin
** Notice: terence/learned_embeds.bin was trained on a model with an incompatible token dimension: 768 vs 1024.
>> Textual inversion triggers:
>> Model loaded in 6.25s
>> Max VRAM used to load the model: 2.16G
>> Current VRAM usage:2.16G
>> [TOKENLOG] Parsed Prompt: FlattenedPrompt:[Fragment:'dog'@1.0]
>> [TOKENLOG] Parsed Negative Prompt: FlattenedPrompt:[Fragment:''@1.0]
>> [TOKENLOG] Tokens (1):
dog
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 11.19it/s]
Error in node 18a25a45-ff5f-4ac6-b855-8ead0999a876 (source node 2): Traceback (most recent call last):
File "/home/bat/Documents/Code/InvokeAI/invokeai/app/services/processor.py", line 54, in __process
outputs = invocation.invoke(
File "/home/bat/Documents/Code/InvokeAI/invokeai/app/invocations/latent.py", line 331, in invoke
noise = context.services.latents.get(self.noise.latents_name)
AttributeError: 'NoneType' object has no attribute 'latents_name'
Session error: creating a new session
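The traceback above boils down to dereferencing an optional input that was never connected: the l2l node's noise field is None, so accessing .latents_name raises. A minimal illustration of that failure mode and the defensive pattern, using hypothetical class names rather than InvokeAI's actual invocation API:

```python
# Illustrative sketch only; class and field names are hypothetical,
# not InvokeAI's actual invocation API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LatentsField:
    latents_name: str

@dataclass
class LatentsToLatentsSketch:
    # Optional input: stays None when no upstream node is linked to it.
    noise: Optional[LatentsField] = None

    def noise_name(self) -> Optional[str]:
        # Guarding the optional field avoids the error above
        # ("'NoneType' object has no attribute 'latents_name'").
        return self.noise.latents_name if self.noise is not None else None
```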
Hrm.... I guess LatentsToLatents needs more testing =/
I'm approving this in the interests of moving things along. @Kyle0654 will you fix the latents2latents invocation before merging?
LatentsToLatents is fixed (a slightly different problem from the one @psychedelicious noticed, which was usage-related rather than a bug). Usage is like so (using t2l to produce a base image):
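The usage example originally attached to this comment did not survive extraction. Purely as an illustration of the shape of such a pipeline, reusing only the invocation names from the report above (the prompts are placeholders, and this may not match the author's actual example):

```
invoke> t2l --prompt dog | l2l --prompt cat
```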
Confirmed working.
Subgraphs are now functional. They are stored in a graph library and are automatically added to the CLI by name. They can expose inputs and outputs when used in the CLI (more work is needed to support this in the UI, namely storing a reference to the graph library graph on the GraphNode or on the Graph).
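To make that description concrete, here is a minimal sketch of the idea: a library of named graphs, each exposing aliased inputs and outputs, so a stored subgraph can be registered as a CLI command. All names here are hypothetical; this is not InvokeAI's actual implementation.

```python
# Minimal sketch of the concept described above. All class and method
# names are hypothetical; this is not InvokeAI's actual graph library API.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ExposedField:
    node_id: str     # id of a node inside the stored subgraph
    field_name: str  # field on that node
    alias: str       # name shown to callers (e.g. as a CLI argument)

@dataclass
class LibraryGraph:
    name: str
    graph: Any  # the stored subgraph definition
    exposed_inputs: list[ExposedField] = field(default_factory=list)
    exposed_outputs: list[ExposedField] = field(default_factory=list)

class GraphLibrary:
    """Stores subgraphs by name so a CLI can register each one as a command."""

    def __init__(self) -> None:
        self._graphs: dict[str, LibraryGraph] = {}

    def save(self, entry: LibraryGraph) -> None:
        self._graphs[entry.name] = entry

    def get(self, name: str) -> LibraryGraph:
        return self._graphs[name]

    def names(self) -> list[str]:
        # A CLI can iterate these to add one command per stored subgraph,
        # using exposed_inputs/exposed_outputs as its arguments and results.
        return sorted(self._graphs)
```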