The-Igor/coreml-stable-diffusion-swift-example
CoreML stable diffusion image generation example

The example app for running text-to-image or image-to-image models to generate images using Apple's Core ML Stable Diffusion implementation
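Under the hood the app drives Apple's `ml-stable-diffusion` Swift package. A minimal text-to-image sketch, assuming a `split_einsum` model placed in a local `Models` folder (the path, prompt, and seed here are illustrative, and the exact `generateImages` signature varies between releases of the package):

```swift
import CoreML
import StableDiffusion // Apple's ml-stable-diffusion Swift package

// split_einsum models are partitioned so Core ML can schedule work on the
// Apple Neural Engine; select compute units accordingly.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine

// Hypothetical local model folder; the example app lets you pick it from a list.
let resourcesURL = URL(fileURLWithPath: "Models/coreml-stable-diffusion-2-base")

let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourcesURL,
    configuration: configuration,
    reduceMemory: true)
try pipeline.loadResources()

let images = try pipeline.generateImages(
    prompt: "a lighthouse at sunset",
    imageCount: 1,
    stepCount: 25,
    seed: 42,
    progressHandler: { _ in true }) // return false to cancel generation
```

`loadResources()` front-loads model compilation, which is why the first generation after launch tends to be the slowest.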

Performance

Generation speed can be unpredictable: a model will sometimes run noticeably slower than it did on a previous run. Core ML appears to schedule work across compute units dynamically, and it doesn't always choose optimally.

SwiftUI example for the package

CoreML stable diffusion image generation

The concept

How to use

  1. Put at least one of your prepared split_einsum models into the local model folder. (The example app supports only split_einsum models; in terms of performance, split_einsum is the fastest way to get a result.)
  2. Pick the model you placed in the local folder from the list. Click the Update button if you added a model while the app was running.
  3. Enter a prompt or pick an image and press "Generate". (You don't need to resize the input image manually.) It may take a minute or two to get the result.
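For the image-to-image path, the same pipeline is seeded with a starting image. A sketch, assuming a pipeline built as above and an already-decoded `CGImage`; the property names (`startingImage`, `strength`) follow the package's pipeline configuration and may differ between releases:

```swift
import CoreGraphics
import StableDiffusion

// Hypothetical input: a CGImage the app decoded from the user's picked file.
// The pipeline resizes/encodes it internally, which is why the app does not
// require manual resizing.
var config = StableDiffusionPipeline.Configuration(
    prompt: "watercolor version of this photo")
config.startingImage = inputCGImage // placeholder for the user's image
config.strength = 0.6               // 0 ≈ keep the input, 1 ≈ ignore it
config.stepCount = 25
config.seed = 42

let images = try pipeline.generateImages(configuration: config) { _ in
    true // return false from the progress handler to cancel
}
```

`strength` controls how far the diffusion process is allowed to drift from the starting image, which is the main knob worth exposing in a UI.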


Model set example

coreml-stable-diffusion-2-base

Documentation (API)

  • You need to have Xcode 13 or later installed in order to have access to the Documentation Compiler (DocC)

  • Go to Product > Build Documentation or ⌃⇧⌘ D


Case study

Deploying Transformers on the Apple Neural Engine

About

A SwiftUI example app for using a Core ML Stable Diffusion model in real-time macOS applications, running both text-to-image and image-to-image generation.
