AttributeError: 'list' object has no attribute '__module__' when calling from_pretrained using file system #7612
Comments
From my investigation, a quick and temporary workaround is to add the following at line 506, right after `not_compiled_module = _unwrap_model(module)`:

```diff
  not_compiled_module = _unwrap_model(module)
+ if isinstance(not_compiled_module, list):
+     return (not_compiled_module[0], not_compiled_module[1])
  library = not_compiled_module.__module__.split(".")[0]
```
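To see why the guard is needed: instances of pure-Python classes inherit a `__module__` attribute from their class, but built-in container instances such as lists do not expose one, which is exactly the `AttributeError` in the title. A standalone sketch, with no diffusers required (`detect_library` and `FakePipeline` are hypothetical stand-ins for the failing line and a pipeline component):

```python
def detect_library(component):
    # Mirrors the failing line in diffusers:
    # library = not_compiled_module.__module__.split(".")[0]
    return component.__module__.split(".")[0]

class FakePipeline:
    # Any pure-Python class instance carries its class's __module__.
    pass

ok = detect_library(FakePipeline())  # a plain component works fine

# A (component, config) pair stored as a list has no __module__,
# which reproduces the AttributeError reported in this issue.
list_fails = False
try:
    detect_library([FakePipeline(), {"config": True}])
except AttributeError:
    list_fails = True

print(ok, list_fails)
```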
I don't want to modify the source library as a workaround. Are you suggesting this is a bug of some kind, or am I using the library incorrectly? Below is the code I'm using to generate this error:
@dferguson992 Taking a look into this.
Hi @dferguson992. The issue here is that the combined pipeline expects to resolve its components through the Hugging Face caching mechanism, so saving to your own local directory and pointing `from_pretrained` at that folder trips up component loading. A potential solution: you can configure the location of the cache by passing in the `cache_dir` argument:

```python
import os

import torch
from diffusers import StableCascadeCombinedPipeline

MODEL_ID = "stabilityai/stable-cascade"
CACHE_DIR = os.getenv("CACHE_DIR", "my_cache_dir")

model = StableCascadeCombinedPipeline.from_pretrained(
    MODEL_ID,
    variant="bf16",
    torch_dtype=torch.float16,
    cache_dir=CACHE_DIR,
)
```

The first time you run this snippet it will download the model to the specified cache directory. If you run it again, it won't download the model again; instead it will load the model from the cache directory.
Hey @DN6, thank you so much for your explanation, it was very helpful. This creates "my_cache_dir" as you imply, but my goal is to package the model contents into a zip file, deploy the model onto an endpoint, and have the endpoint unzip the model contents for loading in a script. The script on my machine unzips the model tarball into a directory called "model/". When I load the model, the script calls `from_pretrained` with `model_dir`, where `model_dir` is "model/". I've been trying to test this locally as well and still get the screenshotted issue above. Even when I extend the local test to include the `cache_dir`, I still get the screenshotted error when I run the tests locally.
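For the zip round-trip itself, the standard library covers both ends. A minimal sketch (function names are illustrative, and the dummy `model_index.json` stands in for the real files a saved pipeline directory contains):

```python
import os
import shutil
import tempfile

def package_model(model_dir: str, archive_base: str) -> str:
    # Zip the saved pipeline directory for upload to the endpoint.
    return shutil.make_archive(archive_base, "zip", root_dir=model_dir)

def unpack_model(archive_path: str, target_dir: str) -> str:
    # On the endpoint, extract back into a local "model/" directory.
    shutil.unpack_archive(archive_path, target_dir)
    return target_dir

# Round-trip demo with a dummy model directory.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "saved_model")
    os.makedirs(src)
    with open(os.path.join(src, "model_index.json"), "w") as f:
        f.write("{}")

    archive = package_model(src, os.path.join(tmp, "model"))
    out = unpack_model(archive, os.path.join(tmp, "model"))
    round_trip_ok = "model_index.json" in os.listdir(out)

print(round_trip_ok)
```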
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hi @dferguson992. You could zip the entire cache dir and unzip it on your endpoint machine/script. If you're using the HF cache mechanism and the models have been saved to the specified cache dir, you would need to pass in the model repo id (not a local path) when using `from_pretrained`, along with the cache directory:

```python
# Local test: load from the cache by repo id, specifying the cache directory
model = StableCascadeCombinedPipeline.from_pretrained(
    "stabilityai/stable-cascade",
    variant="bf16",
    torch_dtype=torch.float16,
    cache_dir="cache_dir",
)
```

If you need to move folders around, I would recommend saving the prior and decoder pipelines separately, since it's a bit tricky to use the combined pipeline without making use of the HF caching mechanism. What you can try is downloading the model folders for the prior and decoder pipelines, and then building the combined pipeline from their components, e.g.

Saving locally:

```python
from diffusers import (
    StableCascadeCombinedPipeline,
    StableCascadeDecoderPipeline,
    StableCascadePriorPipeline,
)

prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior")
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade")

prior.save_pretrained("my_local_sd_cascade_prior")
decoder.save_pretrained("my_local_sd_cascade_decoder")
```

Loading:

```python
prior = StableCascadePriorPipeline.from_pretrained("my_local_sd_cascade_prior")
decoder = StableCascadeDecoderPipeline.from_pretrained("my_local_sd_cascade_decoder")

pipeline = StableCascadeCombinedPipeline(
    tokenizer=decoder.tokenizer,
    text_encoder=decoder.text_encoder,
    decoder=decoder.decoder,
    scheduler=decoder.scheduler,
    vqgan=decoder.vqgan,
    prior_prior=prior.prior,
    prior_text_encoder=prior.text_encoder,
    prior_tokenizer=prior.tokenizer,
    prior_scheduler=prior.scheduler,
    prior_feature_extractor=prior.feature_extractor,
    prior_image_encoder=prior.image_encoder,
)
```
Discussed in #7610
Originally posted by dferguson992 April 8, 2024
I downloaded the stabilityai/stable-cascade model from HuggingFace and saved it to my file system using the following:
My plan is to deploy this to a SageMaker endpoint, but I am having trouble re-loading the model from the local file system. I keep getting the `AttributeError` in the title when I call:

```python
model = StableCascadeCombinedPipeline.from_pretrained("model/", variant="bf16", torch_dtype=torch.float16)
```

I've been at this for several days now; not sure what I'm doing wrong.