I want to use the in-depth functionality of LaMa with the refine=True parameter, and I've learned that it consumes around 24 GB of VRAM. Can someone recommend an EC2 instance with a capable, CUDA-enabled GPU and enough memory? I've looked at the G series, but it's confusing. My ultimate goal is simply to run predictions with refine=True and see the results. @windj007 @cohimame @Sanster
I'd recommend first allocating a big instance with a lot of VRAM, trying what you want, and seeing how much it consumes, then finding the minimal instance that fits.
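For reference, here is a rough sketch of how you could measure the peak VRAM while the refined prediction runs, by launching the predict script and polling nvidia-smi. The predict command follows the pattern from the LaMa README, but the model/input/output paths below are placeholders; adjust them for your instance.

```python
# Launch LaMa prediction with refine=True and record peak GPU memory via nvidia-smi.
import subprocess
import time

cmd = [
    "python3", "bin/predict.py",
    "refine=True",
    "model.path=/home/ubuntu/big-lama",      # placeholder: path to the downloaded model
    "indir=/home/ubuntu/LaMa_test_images",   # placeholder: input images + masks
    "outdir=/home/ubuntu/output",            # placeholder: where results are written
]

proc = subprocess.Popen(cmd)
peak_mib = 0
while proc.poll() is None:
    # memory.used is reported in MiB, one line per GPU.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"]
    )
    used = max(int(x) for x in out.decode().split())
    peak_mib = max(peak_mib, used)
    time.sleep(1)

print(f"Peak GPU memory observed: {peak_mib} MiB (~{peak_mib / 1024:.1f} GiB)")
```

The reported peak should tell you whether a given instance's GPU has enough headroom for your image sizes, since refinement memory use grows with resolution.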
@hamzanaeem1999 AOA Hamza, have you tested LaMa on EC2? Please tell me how many resources were required to run LaMa with refinement, and which EC2 GPU instance you chose.