
Stable Diffusion Model never loads. #2

Closed
esinanturan opened this issue Aug 5, 2023 · 7 comments

esinanturan commented Aug 5, 2023

Hi, and thank you for this great idea of making SD available in React Native on iOS.
I have tried to run the example from the repository on my iPhone 13 Pro Max, but I never get a response from the loadModel method: it awaits there forever with no errors, warnings, or anything else. It is stuck at the "loading model" log.
What am I missing? @andrei-zgirvaci

// Imports and styles assumed from the project setup (not shown in the original snippet).
import * as FileSystem from "expo-file-system";
import * as ExpoStableDiffusion from "expo-stable-diffusion";
import { Button, StyleSheet, Text, View } from "react-native";

// Directory of the CoreML model on the device and the output image path.
const MODEL_PATH = FileSystem.documentDirectory + "coreml-stable-diffusion-2-1";
const SAVE_PATH = FileSystem.cacheDirectory + "image.jpeg";

export default function App() {
  const loadModel = async () => {
    console.log("loading model");
    await ExpoStableDiffusion.loadModel(MODEL_PATH);
    console.log("loaded model");
  };

  const generateImage = async () => {
    console.log("generating");
    await ExpoStableDiffusion.generateImage({
      prompt: "a photo of an astronaut riding a horse on mars",
      stepCount: 25,
      savePath: SAVE_PATH,
    });
    console.log("generated");
  };

  return (
    <View style={styles.container}>
      <Text>Testing Expo Stable Diffusion</Text>

      <Button onPress={loadModel} title="Load the Model" />

      <Button onPress={generateImage} title="Generate the Image" />
    </View>
  );
}

// Minimal styles assumed so the snippet compiles on its own.
const styles = StyleSheet.create({
  container: { flex: 1, alignItems: "center", justifyContent: "center" },
});


andrei-zgirvaci commented Aug 9, 2023

Hi @esinanturan, thanks for opening the issue!

There was a bug regarding how the path was parsed in Swift. I believe it's related to issue #3, which should now be fixed in 0.1.2.

Could you update the package and give it another try?

@esinanturan (Author)

@andrei-zgirvaci I have tried again on my device (iPhone 13 Pro Max), but still no success; it's stuck at loading the model.
I couldn't make it work on the simulator either; it gives this error there:
[screenshot: simulator console output with the error]


andrei-zgirvaci commented Aug 9, 2023

@esinanturan In the simulator it looks like the model actually loaded; you can see the console output saying "loaded model".

Regarding the CoreML warning "Failed to get the home directory when checking model path.", you can safely ignore it.

Give it a few more minutes and you should see "Steps: 0..." being printed in the console and the image being generated.

Try running the app from Xcode on your iPhone 13 Pro Max and check whether you get the "loaded model" console output; it might just take longer because it's running on a smaller chip than a MacBook.

I will also add a new console print before generating the image, to show that the image is about to be generated!

Also, make sure that the GeneratedImages folder exists. You can try changing the save path with the following line:

const SAVE_PATH = FileSystem.documentDirectory + 'image.jpeg';

See more info in this issue: #3
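
As a rough illustration (not part of expo-stable-diffusion; the GeneratedImages directory name is just the one mentioned above), you could make sure the target folder exists with expo-file-system before generating:

import * as FileSystem from "expo-file-system";

const SAVE_DIR = FileSystem.documentDirectory + "GeneratedImages/";
const SAVE_PATH = SAVE_DIR + "image.jpeg";

// Sketch: create the output directory if it does not exist yet, so that
// generateImage has somewhere to write the file. Call this (for example from
// the generateImage handler in the snippet above) before
// ExpoStableDiffusion.generateImage.
async function ensureDirExists(dirUri: string) {
  const info = await FileSystem.getInfoAsync(dirUri);
  if (!info.exists) {
    await FileSystem.makeDirectoryAsync(dirUri, { intermediates: true });
  }
}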


esinanturan commented Aug 9, 2023

@andrei-zgirvaci On the simulator I am getting the "generating" log and it gets stuck there.
As you can see, the simulator is using 2 GB of RAM and doing something, I guess, but not returning a response.
[screenshot: simulator memory usage]

I have also changed the save path to FileSystem.documentDirectory + 'image.jpeg', but still no success.


esinanturan commented Aug 9, 2023

@andrei-zgirvaci I have finally succeeded in generating an image on my device (but still no success on the simulator).
However, it took 15-20 minutes to load the model, which is not efficient for any use case. Can we do anything on the native side to make it load faster?

Here is the result:
[generated image]

And generating the image itself took quite a short time, around 10-15 seconds. But loading the model is a nightmare :)

Edit: Well, actually, after the first load it now takes around 5 seconds to load the model. But the first load was quite rough; we need to improve that, or maybe it was a one-time thing, I am not sure. I will try re-installing it.

I have tried re-installing, and the first model load takes 10 minutes. That is quite a lot for this to be practical. I hope we can improve it.

Thank you for your effort @andrei-zgirvaci


andrei-zgirvaci commented Aug 9, 2023

@esinanturan Nice, I am glad you managed to make it work on your physical device. The long load time is to be expected; however, there are some things we can do to optimize the model loading time!

I have also documented more about this in my blog post:
https://andreizgirvaci.com/blog/how-to-create-ai-generated-images-on-ios-in-react-native-using-stable-diffusion#running-stable-diffusion-on-lower-end-devices

If you have the latest iOS 17 installed, you have some options!

First, I would try converting the base 16-bit SD model to a 6-bit palettized model instead, which should improve both the loading and the image generation time!

You can convert a model to 6-bit instead of 16-bit by specifying the --quantize-nbits argument like so:

python -m python_coreml_stable_diffusion.torch2coreml \
  --model-version stabilityai/stable-diffusion-2-1-base \
  --convert-unet \
  --convert-text-encoder \
  --convert-vae-decoder \
  --convert-safety-checker \
  --quantize-nbits 6 \
  --chunk-unet \
  --attention-implementation SPLIT_EINSUM_V2 \
  --compute-unit ALL \
  --bundle-resources-for-swift-cli \
  -o models/stable-diffusion-2-1/split_einsum_v2/compiled

Another option is to experiment with running the model on .cpuAndGPU instead of the current .cpuAndNeuralEngine and see if it makes a difference.

Currently, there is no way to change this option from expo-stable-diffusion, but you can change the code directly in your node_modules, rebuild the iOS app and see if it makes a difference.

To do so, you have to change this line:

config.computeUnits = .cpuAndNeuralEngine

to:

config.computeUnits = .cpuAndGPU

Let me know about your findings. If .cpuAndGPU seems to work better, I will add the ability to switch that option when generating an image.
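
If it helps with comparing the two builds, here is a rough timing sketch on the JS side (just console timing around the existing loadModel/generateImage calls shown earlier; nothing in it is specific to the compute-unit change, and the paths/prompt are the ones from the example):

import * as ExpoStableDiffusion from "expo-stable-diffusion";
import * as FileSystem from "expo-file-system";

// Sketch: log how long model loading and image generation take, so the
// .cpuAndGPU and .cpuAndNeuralEngine builds can be compared side by side.
async function benchmark() {
  const modelPath = FileSystem.documentDirectory + "coreml-stable-diffusion-2-1";
  const savePath = FileSystem.documentDirectory + "image.jpeg";

  let start = Date.now();
  await ExpoStableDiffusion.loadModel(modelPath);
  console.log(`model loaded in ${(Date.now() - start) / 1000}s`);

  start = Date.now();
  await ExpoStableDiffusion.generateImage({
    prompt: "a photo of an astronaut riding a horse on mars",
    stepCount: 25,
    savePath,
  });
  console.log(`image generated in ${(Date.now() - start) / 1000}s`);
}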

Hope this helps!

@andrei-zgirvaci (Owner)

I will mark this issue as completed for now, as the described problem has been fixed.

In case you encounter other issues regarding this, let me know!
