Add flag to disable training on GPU for TEDPolicy #10897
Comments
WashingtonBispo commented: Hello there. I'm interested in working on this issue.
melindaloubser1 commented: @WashingtonBispo Would you like to contribute a PR for this feature, or do you mean you're interested in future work on it?
WashingtonBispo commented: I intend to contribute a PR. I will work on it this weekend.
melindaloubser1 commented: Great, thanks! Please tag me in the PR and I'll make sure it gets reviewed.
WashingtonBispo commented: Done. @melindaloubser1
melindaloubser1 commented: For anyone who needs a workaround until the PR is merged, you can train NLU on GPU and core on CPU separately, by exporting
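The comment above is cut off mid-sentence. A sketch of that workaround, assuming it refers to the standard CUDA_VISIBLE_DEVICES environment variable (which TensorFlow honors), might look like:

```shell
# Sketch of the suggested workaround: train the two parts separately,
# hiding GPUs from TensorFlow only for the core (TED) run.
# Setting CUDA_VISIBLE_DEVICES=-1 makes TensorFlow see no CUDA devices.

# NLU (DIETClassifier) still benefits from the GPU:
rasa train nlu

# Core (TEDPolicy) trains on CPU only:
CUDA_VISIBLE_DEVICES=-1 rasa train core
```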
Closed by #10944
What problem are you trying to solve?
Since the bump to TensorFlow 2.6, training TED on GPU is significantly slower than before (4X slower in one case). CPU training time is not affected to the same degree.
Because it is still advantageous to use GPU for DIETClassifier, it's desirable to disable training on GPU for TED only to save time.
What's your suggested solution?
A parameter for TED, e.g. use_gpu, that controls whether available GPUs are used.
Examples (if relevant)
No response
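One minimal way such a flag could work (a sketch only, not Rasa's actual implementation; the helper name and the reliance on CUDA_VISIBLE_DEVICES are assumptions) is to hide CUDA devices from TensorFlow before the TED model is built:

```python
import os


def configure_gpu_usage(use_gpu: bool) -> None:
    """Hypothetical helper: hide GPUs from TensorFlow when use_gpu is False.

    This must run before TensorFlow initializes CUDA; setting
    CUDA_VISIBLE_DEVICES to "-1" makes TensorFlow report no GPUs,
    so training falls back to CPU.
    """
    if not use_gpu:
        os.environ["CUDA_VISIBLE_DEVICES"] = "-1"


configure_gpu_usage(use_gpu=False)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # -1
```

An environment-variable approach only works process-wide, which is why the workaround above trains NLU and core in separate processes; a per-component flag would need a TensorFlow-level mechanism instead.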
Is anything blocking this from being implemented? (if relevant)
No response
Definition of Done