Share GPU assumptions #41

Closed
sradc opened this issue Mar 10, 2023 · 3 comments
sradc commented Mar 10, 2023

Hey, it looks like this assumes there are 8 GPUs available. Could you provide a bit more info about that? (i.e., which GPUs do you run this on, and which do you recommend?)

(Maybe worth adding some info on this in the readme?)


b2zer commented Mar 11, 2023

Anything goes, so long as you have 70 GB of VRAM... lol

I found this fork super useful (it removes a lot of the models that would otherwise just give you an OOM, so you can at least use some of them while chatting with ChatGPT and having it create images for you):

https://github.com/rupeshs/visual-chatgpt/tree/add-colab-support
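
If you want to trim the model list yourself, here's a minimal sketch (plain PyTorch, not code from the fork) for checking free VRAM per GPU before loading anything; `MIN_FREE_GB` is just a made-up threshold to tune per model:

```python
import torch

MIN_FREE_GB = 8  # hypothetical threshold; depends on the model you want to place

for i in range(torch.cuda.device_count()):
    # mem_get_info returns (free, total) in bytes for the given device
    free_bytes, total_bytes = torch.cuda.mem_get_info(i)
    free_gb = free_bytes / 1024**3
    total_gb = total_bytes / 1024**3
    verdict = "ok" if free_gb >= MIN_FREE_GB else "skip"
    print(f"cuda:{i}: {free_gb:.1f}/{total_gb:.1f} GB free -> {verdict}")
```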

It's good for toying around with this proof of concept, but to enjoy the whole thing you either need to pay for some deluxe cloud compute or live in a small server room.

Good thing they said there will be an API "in a few days". Together with the GPT-4 rumors turning into "next week", I guess I'll settle for playing with what fits in my VRAM and then try to get my hands on the API. :-)


sradc commented Mar 11, 2023

I did get this running in the end, on 8x NVIDIA A100 40 GB, but various bugs prevented it from fully working (for one, the masking for inpainting wasn't working; not sure whether the fork uses/fixes that model?).

Anything goes, so long as you have 70 GB of VRAM...

Not quite: I tried running it on 8x NVIDIA Tesla V100 16 GB, but got an OOM on one of the cards when trying to generate an image. I.e., each card needs to be big enough to run the models allocatedated to it, not just the total pool.
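
To illustrate the allocation point, a minimal sketch using diffusers directly (the model IDs are illustrative, not the repo's actual loading code): each pipeline is pinned to one GPU, so that card alone has to fit that model.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

# Text-to-image pinned to cuda:0, inpainting pinned to cuda:1.
# Each pipeline needs several GB even in fp16, which is why a 16 GB
# card can still OOM if too many models end up sharing it.
txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda:0")
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda:1")

# Generation runs entirely on the card its pipeline was assigned to
image = txt2img("a photo of a red fox in the snow").images[0]
```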

...Looking forward to the multimodal APIs coming soon, as you say.


sradc commented Mar 18, 2023

Looks like this info is now in the readme.

sradc closed this as completed Mar 18, 2023