I have two GTX 1070s, non-SLI. Has anyone tried running AI Dungeon on two GPUs? What were some of the issues or challenges? The program calls for a hefty amount of GPU RAM. Would a two-GPU configuration work well?
A multi-GPU setup cannot split a model like that without a LOT of work; instead, multiple GPUs would typically be used to handle multiple inputs at once (not something this project would benefit from).
Essentially, model parallelism is much more difficult to implement than data parallelism, and neither would work particularly well with GPT-2 without significant changes.
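To illustrate the distinction, here is a minimal PyTorch sketch (not the project's TensorFlow-based GPT-2 code; the layer sizes and module names are placeholders). Note that data parallelism still requires a full copy of the model on each GPU, so it would not reduce the per-GPU memory requirement that motivated the original question:

```python
import torch
import torch.nn as nn

# --- Data parallelism: replicate the whole model on each GPU and split the
# --- batch across them. Each GPU still holds the full model, so this does
# --- not lower the per-GPU memory requirement.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
dp_model = nn.DataParallel(model, device_ids=[0, 1]).cuda()
out = dp_model(torch.randn(8, 1024).cuda())  # batch of 8 runs as 4 per GPU

# --- Model parallelism: place different layers on different GPUs and move
# --- activations between them by hand. This splits the memory cost, but it
# --- requires restructuring the model's forward pass.
class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(1024, 4096).to("cuda:0")
        self.part2 = nn.Linear(4096, 1024).to("cuda:1")

    def forward(self, x):
        x = torch.relu(self.part1(x.to("cuda:0")))
        return self.part2(x.to("cuda:1"))  # hand activations to the second GPU

mp_out = SplitModel()(torch.randn(8, 1024))
```

Doing the second approach for GPT-2 would mean restructuring its graph so that attention blocks live on different devices, which is the "LOT of work" referred to above.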