
slow training with GPU #9

Open

nitishajain opened this issue Sep 4, 2021 · 2 comments

Comments

@nitishajain

Hello,

Thank you for providing the code for your paper. As per the instructions, I am running the code for Version 2 of RNNLogic with emb. While training runs as expected, it is very slow on my GPU server for both the wn18rr and FB15K-237 datasets.
Could you share your experimental setup for these experiments, in terms of the underlying hardware and the expected run times? That would let me estimate the running times for my own setup.

Thanks!

@chenxran

chenxran commented Jan 10, 2022

Hello, I am facing the same problem when trying to re-implement RNNLogic using the code in the main branch. I found that using the multiprocessing package to train the model for each relation concurrently does not speed things up, since a single process already consumes almost 50% of my CPU (Intel Xeon Gold 5220). Did you face the same problem? Approximately how long did it take you to train on FB15k-237, or on much smaller datasets like umls/kinship?
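
For context, here is a minimal sketch of the per-relation multiprocessing pattern described above. This is not the repo's actual code: `train_relation` is a hypothetical stand-in for the per-relation training routine, and capping each worker's intra-op threads with `torch.set_num_threads(1)` is one common mitigation for the kind of CPU oversubscription described, not necessarily what the main branch does.

```python
import multiprocessing as mp
import torch

def train_relation(relation_id: int) -> int:
    # Cap PyTorch's intra-op thread pool in each worker. Without this,
    # every worker may spawn as many threads as there are cores, and the
    # resulting oversubscription can let a single process occupy most of
    # the CPU, erasing any gain from running relations concurrently.
    torch.set_num_threads(1)
    # Placeholder training step; the real per-relation loop would go here.
    x = torch.randn(100, 10)
    w = torch.zeros(10, requires_grad=True)
    loss = (x @ w).pow(2).mean()
    loss.backward()
    return relation_id

if __name__ == "__main__":
    relations = list(range(16))  # one task per relation
    # "spawn" avoids fork-related issues with PyTorch/CUDA state.
    with mp.get_context("spawn").Pool(processes=4) as pool:
        pool.map(train_relation, relations)
```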

@mnqu
Collaborator

mnqu commented May 2, 2022

Thanks for your interest, and very sorry for the late response. We have refactored the code; the new code is in the RNNLogic+ folder and is more readable and easier to run. You might be interested. Thanks!
