Hello! I'm wondering if it's possible to use the Controlnet tile model, lllyasviel/control_v11f1e_sd15_tile, with Swift?
From the Hugging Face example, the controlnet_conditioning_image appears to be 1024x1024, but when I run the Swift CLI with an image larger than the 512x512 that SD 1.5 expects, I get an error:
```shell
swift run StableDiffusionSample "high quality photo of a dog on the beach" \
  --negative-prompt "blur, lowres, bad anatomy, bad hands, cropped, worst quality" \
  --resource-path ./mlpackages/512x512/Resources \
  --controlnet LllyasvielControlV11F1ESd15Tile \
  --controlnet-inputs ./inputs/dog-1024.png \
  --output-path ./images \
  --seed 100 \
  --step-count 20 \
  --image-count 1 \
  --scheduler dpmpp \
  --compute-units cpuAndGPU \
  --strength 0.5
```

```
Building for debugging...
Build complete! (0.13s)
Loading resources and creating pipeline
(Note: This can take a while the first time using these resources)
Sampling ...
Error: Failed to obtain prediction for sample 0
```
I'm unsure how to go about getting the larger conditioning image into the pipeline.
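If I understand the Core ML conversion correctly, the models are compiled with fixed input shapes, so a pipeline converted at 512x512 presumably can't accept a 1024x1024 conditioning image, and actually running at 1024x1024 would need the UNet and ControlNet reconverted at that size. As a stopgap I could downscale the conditioning image to 512x512 first, e.g. with `sips` (which ships with macOS; the file names here are just illustrative):

```shell
# Downscale the 1024x1024 conditioning image to the pipeline's
# native 512x512 before handing it to --controlnet-inputs.
# (Paths are illustrative, not from my actual setup.)
sips -z 512 512 ./inputs/dog-1024.png --out ./inputs/dog-512.png
```

and then pass `./inputs/dog-512.png` via `--controlnet-inputs`, though that rather defeats the point of using the tile model for higher-resolution detail.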
@jrittvo I saw you have a bunch of converted controlnet models on hugging face. Did you have any luck getting the tile model working?