Replies: 1 comment
-
You could try DeepSpeed ZeRO-3 with offload.
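For context, a minimal DeepSpeed config enabling ZeRO stage 3 with CPU offload might look like the sketch below. The exact batch size and precision settings are assumptions; tune them for your hardware.

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 8,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true }
  }
}
```

ZeRO-3 shards parameters, gradients, and optimizer states across the three GPUs, and the offload settings push optimizer states and parameters to CPU RAM, so make sure the machine has plenty of system memory.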
-
Just a short question: can I do full fine-tuning of a Llama 2 7B or Llama 3 8B model on 3x RTX 3090 (3x24 GB = 72 GB VRAM), or do I need a single card with bigger VRAM? The 3x3090 setup is an ideal (cheap) choice for home projects.
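A rough back-of-the-envelope estimate (assuming standard mixed-precision AdamW training, ignoring activations) shows why 72 GB is tight for full fine-tuning without sharding or offload:

```python
def full_finetune_vram_gb(n_params_billion: float) -> float:
    """Estimate VRAM (GB) for full fine-tuning with mixed-precision AdamW.

    Per parameter: 2 B fp16/bf16 weights + 2 B fp16/bf16 gradients
    + 4 B fp32 master weights + 8 B Adam moments (m, v) = 16 bytes.
    Activations and temporary buffers are NOT included.
    """
    bytes_per_param = 16
    return n_params_billion * bytes_per_param


print(full_finetune_vram_gb(7))  # Llama 2 7B -> 112.0 GB
print(full_finetune_vram_gb(8))  # Llama 3 8B -> 128.0 GB
```

So the training state alone exceeds 72 GB. ZeRO-3 divides that state across the 3 GPUs (roughly 112/3 = ~37 GB each, before activations), and CPU offload can move the optimizer states off the GPUs entirely, which is why it was suggested above.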