How to use the Graph2Seq model with multiple GPUs? #481
Comments
Yes.
@AlanSwift Can you please point me in the direction of how I can train Graph2Seq in a multi-GPU environment? Do you have any such examples?
@AlanSwift is there a configuration parameter that I have to set? Is there an example?
We are sorry, but currently we don't have an example for multiple GPUs. You might refer to the PyTorch documentation.
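Not part of graph4nlp itself, but for reference, the standard PyTorch pattern the maintainer points to is `DistributedDataParallel` (DDP). The sketch below uses a plain `nn.Linear` as a hypothetical stand-in for the Graph2Seq model and runs a single CPU process with the `gloo` backend so it works anywhere; in real multi-GPU training you would use the `nccl` backend and one process per GPU.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train_one_step(rank=0, world_size=1):
    # Initialize the default process group. "gloo" also works on CPU-only
    # machines; "nccl" is the usual choice for multi-GPU training.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Stand-in for the Graph2Seq model; the real model would be built
    # the same way and then wrapped in DDP.
    model = nn.Linear(8, 4)
    ddp_model = DDP(model)  # pass device_ids=[rank] when using GPUs

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
    x, y = torch.randn(16, 8), torch.randn(16, 4)

    loss = nn.functional.mse_loss(ddp_model(x), y)
    optimizer.zero_grad()
    loss.backward()   # DDP all-reduces gradients across ranks here
    optimizer.step()

    dist.destroy_process_group()
    return loss.item()
```

In practice this function would be launched once per GPU (e.g. via `torchrun` or `torch.multiprocessing.spawn`), with a `DistributedSampler` in the `DataLoader` so each rank sees a different shard of the dataset.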
@AlanSwift do you plan to add an example for multiple GPUs? That would be really handy.
OK. I will do it in my free time. One tip: since the
@AlanSwift that would be great. Looking forward to it.
This issue is closed due to no further requests / updates. Please re-open it if necessary.
@AlanSwift the issue is closed, but no solution has been provided yet.
@AlanSwift do you have any updates?
@AlanSwift without support for running in a multi-GPU environment, I am not sure how this library could be useful for large datasets. I would really appreciate any updates.
@nashid This issue is under discussion now. I will give you a solution in around a week. Thanks for your important advice.
@AlanSwift any update? Thanks for looking into it 👍
@AlanSwift I will try it out. When will this solution be part of graph4nlp? Any tentative timeline?
How do I train the Graph2Seq model in a multi-GPU environment? As an example, there is an NMT example here: https://github.com/graph4ai/graph4nlp/tree/master/examples/pytorch/nmt
The model is built here:
https://github.com/graph4ai/graph4nlp/blob/master/examples/pytorch/nmt/build_model.py
Could this be extended to be trained in a multi-GPU environment?
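One low-effort way to extend a built model, assuming it behaves like a standard `nn.Module`, is `torch.nn.DataParallel`, which splits each batch across the visible GPUs in a single process. The model below is a hypothetical placeholder, not the actual output of `build_model.py`; the wrapping step would be the same for a real Graph2Seq instance (though DDP, sketched earlier in the thread, is the recommended approach for serious training).

```python
import torch
import torch.nn as nn

# Placeholder for the model returned by build_model.py; the wrapping
# step is identical for the real Graph2Seq instance.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # Replicates the model on each visible GPU, splits each input batch
    # along dim 0, and gathers the outputs back on the default device.
    model = nn.DataParallel(model)
model = model.to(device)

batch = torch.randn(32, 8, device=device)
output = model(batch)  # shape: (32, 4)
```

On a single-GPU or CPU-only machine the `DataParallel` wrap is simply skipped, so the same script runs everywhere.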