Thank you for your interest! Currently, you can set low_resource to True in the evaluation config file eval_configs/minigpt4_eval.yaml. In addition, you may need to set the number of beams to 1 in demo.py to save GPU memory.
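For reference, the relevant part of eval_configs/minigpt4_eval.yaml should look roughly like the sketch below; everything except the low_resource field is illustrative and may differ between versions of the repo:

```yaml
model:
  arch: mini_gpt4
  model_type: pretrain_vicuna
  # Load the Vicuna language model in 8-bit so it fits in ~24 GB of VRAM
  low_resource: True
```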
As another user discussed in this issue, this reduces the memory usage to about 24 GB, although it may still OOM when the output is long. We are working on a solution that keeps it within 24 GB of memory and will get back to you once we finish.
We have updated the default settings of the demo, and it should now load Vicuna in 8-bit by default. The demo should now be able to run on a single 3090 if you set the beam search width to 1 (which is also the default value now).
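If you are curious what 8-bit loading looks like in general, here is a minimal sketch using the Hugging Face transformers API (it requires bitsandbytes; the model path is a placeholder, and this is not the exact loading code used by MiniGPT-4):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "path/to/vicuna-13b" is a placeholder; point it at your local Vicuna weights.
tokenizer = AutoTokenizer.from_pretrained("path/to/vicuna-13b")
model = AutoModelForCausalLM.from_pretrained(
    "path/to/vicuna-13b",
    load_in_8bit=True,        # quantize the weights to int8 at load time
    torch_dtype=torch.float16,
    device_map="auto",        # place layers on the available GPU automatically
)

# A beam width of 1 (greedy search) keeps generation memory low.
inputs = tokenizer("Describe the image.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, num_beams=1, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```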
My GPU is a 3090 Ti with 24 GB. I have to use load_8_bit to load Vicuna 13B. Could you tell me how to do this with MiniGPT-4?