cannot get_mask when I vary the cuda device #6
Also, I guess I loaded the model to the expected cuda
Hi, thanks for the interest in our work! Interestingly, I met a similar problem when I first played with the ODISE repo 😂. Here's the issue I raised in their repo. I ended up getting around this problem by setting the

Best,
Hello Junyi, great job! It seems that everything works well when calling `get_features` in `extractor_sd.py` using `cuda:3`, but the inference process fails even when I change `def inference(model, aug, image, vocab, label_list)` from

```python
demo = StableDiffusionSeg(inference_model, demo_metadata, aug)
pred = demo.predict(np.array(image))
```

to

```python
demo = StableDiffusionSeg(inference_model, demo_metadata, aug)
demo.model = demo.model.to(torch.device("cuda:3"))
pred = demo.predict(np.array(image))
```
I guess the main problem is that the decoder part of the model ends up on the wrong device, but I'm not sure how to fix it.
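One common cause of this kind of symptom (not confirmed to be the cause here) is that `nn.Module.to()` only moves registered parameters and buffers; any tensor stored as a plain Python attribute on the module stays behind. The `Decoder` class and `mask_template` name below are purely illustrative, not from this repo, and a dtype change stands in for a device change so the sketch runs on CPU:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Illustrative module, not from the actual repo."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)          # registered parameter: moved by .to()
        self.mask_template = torch.zeros(4)  # plain attribute: NOT moved by .to()

m = Decoder()
# Stand-in for m.to(torch.device("cuda:3")); same mechanism applies to devices.
m = m.to(torch.float64)

print(m.proj.weight.dtype)    # registered weight follows .to() -> torch.float64
print(m.mask_template.dtype)  # plain attribute is left behind -> torch.float32
```

If something like this is what's happening, registering the tensor with `self.register_buffer("mask_template", torch.zeros(4))` would make `.to()` (and `.cuda()`) carry it along with the rest of the module.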