
How to use the Graph2Seq model with multiple GPUs? #481

Closed
nashid opened this issue Jan 11, 2022 · 15 comments

nashid commented Jan 11, 2022

How can I train the Graph2Seq model in a multi-GPU environment? For reference, there is an NMT example here: https://github.com/graph4ai/graph4nlp/tree/master/examples/pytorch/nmt

The model is built here:
https://github.com/graph4ai/graph4nlp/blob/master/examples/pytorch/nmt/build_model.py

Could this be extended to train on multiple GPUs?

AlanSwift (Contributor) commented

Yes.

nashid commented Jan 11, 2022

@AlanSwift Can you please point me in the direction of how I can train Graph2Seq in a multi-GPU environment? Do you have any such examples?

nashid commented Jan 11, 2022

@AlanSwift Is there a configuration parameter I have to set? Is there an example anywhere?

AlanSwift (Contributor) commented

We are sorry that we currently don't have an example for multiple GPUs. You might refer to the PyTorch documentation.

nashid commented Jan 11, 2022

@AlanSwift Do you plan to add an example for multiple GPUs? That would be really handy.

AlanSwift (Contributor) commented

OK, I will do it in my free time. One tip: since the Graph2Seq model takes GraphData as input, which doesn't support scatter for nn.DataParallel, I think the best choice is nn.parallel.DistributedDataParallel. It is quite tricky, and I will provide a demo soon.
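
For readers landing here, a minimal sketch of the DDP route described in this tip. It assumes `build_model`/`args` stand in for the NMT example's build script, and that `dataset`/`collate_fn` yield (GraphData, tensor) batches; all four names are placeholders, not confirmed graph4nlp APIs.

```python
# Hedged sketch (not graph4nlp's official recipe): DistributedDataParallel
# training with one process per GPU. `build_model`, `args`, `dataset`, and
# `collate_fn` are placeholders; the Graph2Seq forward signature below is
# assumed, not confirmed.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process,
    # so init_process_group can read everything from the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = build_model(args).to(local_rank)      # placeholder builder
    model = DDP(model, device_ids=[local_rank])

    # Each process sees a disjoint shard of the data, so GraphData batches
    # never need to be scattered across devices, sidestepping the DP issue.
    sampler = DistributedSampler(dataset)         # placeholder dataset
    loader = DataLoader(dataset, batch_size=32, sampler=sampler,
                        collate_fn=collate_fn)    # placeholder collate

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(10):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for batch_graph, target in loader:
            # Assumes GraphData supports .to(device); verify against the
            # graph4nlp version you use.
            batch_graph = batch_graph.to(local_rank)
            target = target.to(local_rank)
            loss = model(batch_graph, target)     # assumed loss-returning forward
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

This would be launched with something like `torchrun --nproc_per_node=4 train.py` (or `python -m torch.distributed.launch` on older PyTorch); the script name is illustrative.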

nashid commented Jan 13, 2022

@AlanSwift that would be great. Looking forward to it.

xiao03 (Contributor) commented Jan 13, 2022

This issue is closed due to no further requests / updates. Please re-open it if necessary.

xiao03 closed this as completed Jan 13, 2022
nashid commented Jan 13, 2022

@AlanSwift The issue is closed, but no solution has been provided yet.

nashid commented Jan 21, 2022

@AlanSwift Wondering if you have any update?

nashid commented Jan 21, 2022

@AlanSwift Without support for running in a multi-GPU environment, I am not sure how this library could be useful for large datasets. I would really appreciate any updates.

AlanSwift (Contributor) commented

@nashid This issue is under discussion now. I will provide a solution in around a week. Thanks for your valuable advice.

nashid commented Feb 2, 2022

@AlanSwift Wondering if there is any update? And thanks for looking into it 👍

AlanSwift (Contributor) commented

This part is under discussion now. I can give a temporary solution.
There are two methods for multiple GPUs: 1. nn.DataParallel (DP), 2. nn.DistributedDataParallel (DDP).
The DP solution is here.
The DDP solution is here.
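
For context on why the DP route needs extra machinery for GraphData, as noted earlier in the thread, here is a hedged illustration (placeholders as in the DDP sketch above):

```python
# Hedged illustration of the nn.DataParallel (DP) pitfall mentioned above;
# `build_model` and `args` are the same placeholders as in the DDP sketch.
import torch.nn as nn

model = nn.DataParallel(build_model(args).cuda())
# model(batch_graph, target) is problematic here: DataParallel's scatter step
# only splits tensors (and nested lists/tuples/dicts of tensors) along dim 0.
# A batched GraphData object is not a tensor, so each replica receives the
# same whole batch on the original device, which leads to device-mismatch
# errors or redundant work. Making DP work requires a custom scatter that
# splits the batch into per-GPU GraphData objects before the forward pass,
# which is why the DDP route, where each process builds its own batch, is
# usually simpler.
```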

nashid commented Feb 2, 2022

@AlanSwift I will try it out. Wondering when this solution will be part of graph4nlp? Any tentative timeline?
