I'm a PyTorch and MXNet user, and Flux looks promising to me. I have 8 GPUs on a server and want to train my models faster, but I can't find any documentation on parallel training across multiple GPUs. Is it possible to copy the model onto each GPU and split the input data between them?
I found PR #154, which suggests there may be difficulties with deep copying. Has there been any progress since then?
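For concreteness, here is a minimal sketch of the data-parallel scheme described above. It assumes CUDA.jl's `devices()`/`device!` API and that Flux's `gpu` places arrays on the currently selected device; the model and the shard split are hypothetical, and none of this is a documented Flux multi-GPU API:

```julia
using Flux, CUDA

# Hypothetical model, just for illustration.
model = Chain(Dense(784 => 256, relu), Dense(256 => 10))

# One replica per visible GPU; the deepcopy-then-move step is exactly
# where PR #154 ran into trouble.
devs = collect(CUDA.devices())
replicas = map(devs) do dev
    CUDA.device!(dev)      # make `dev` the current device
    gpu(deepcopy(model))   # assumption: gpu() allocates on the current device
end

# Split one minibatch across the replicas (Flux batches along the last
# dimension) and run each forward pass on its own device.
x = rand(Float32, 784, 64)
n = length(devs)
shards = [x[:, i:n:end] for i in 1:n]
outs = map(zip(devs, replicas, shards)) do (dev, m, shard)
    CUDA.device!(dev)
    m(gpu(shard))
end
```

From there, each replica's gradients would still have to be brought back to the host, averaged, and broadcast to every copy after each step; that synchronization is the part no Flux API covered at the time of this issue.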
pemryan