Thanks for developing and open-sourcing the code.
I have a question about the original paper, which states: "To be specific, we sample part of Wikidata which contains 5,040,986 entities and 24,267,796 fact triples."
Is there any plan to add a description of how the facts were sampled and how the embeddings were trained?
For example, how long training takes, how to tune the hyperparameters, and so on.
izuna385 changed the title from "Details about Training pre-trained embeddings using TransE" to "Details about training pre-trained embeddings using TransE" on Jun 27, 2019.
Based on the entities that occur in the pre-training corpus, we sample a subgraph consisting of those entities. Then we use TransE to train the embeddings, which takes several hours on 8 CPUs.
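For anyone trying to reproduce this, here is a minimal sketch of the two steps described above: filtering the Wikidata triples down to the entities seen in the pre-training corpus, and training TransE on the resulting subgraph. This is not the authors' actual pipeline; the file names (corpus_entities.txt, wikidata_triples.tsv) and the hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# --- 1. Sample the subgraph: keep only triples whose head AND tail occur in the corpus ---
corpus_entities = set(line.strip() for line in open("corpus_entities.txt"))
triples = []
for line in open("wikidata_triples.tsv"):          # each line: head \t relation \t tail
    h, r, t = line.rstrip("\n").split("\t")
    if h in corpus_entities and t in corpus_entities:
        triples.append((h, r, t))

ent2id = {e: i for i, e in enumerate(sorted({x for h, _, t in triples for x in (h, t)}))}
rel2id = {r: i for i, r in enumerate(sorted({r for _, r, _ in triples}))}
data = torch.tensor([[ent2id[h], rel2id[r], ent2id[t]] for h, r, t in triples])

# --- 2. TransE: score(h, r, t) = ||h + r - t||, trained with a margin ranking loss ---
dim, margin, epochs, batch_size = 100, 1.0, 50, 1024   # illustrative hyperparameters
ent = nn.Embedding(len(ent2id), dim)
rel = nn.Embedding(len(rel2id), dim)
nn.init.xavier_uniform_(ent.weight)
nn.init.xavier_uniform_(rel.weight)
opt = torch.optim.SGD(list(ent.parameters()) + list(rel.parameters()), lr=0.01)

def score(h, r, t):
    # L1 distance between (head + relation) and tail embeddings
    return (ent(h) + rel(r) - ent(t)).norm(p=1, dim=-1)

for epoch in range(epochs):
    perm = torch.randperm(len(data))
    for i in range(0, len(data), batch_size):
        batch = data[perm[i:i + batch_size]]
        h, r, t = batch[:, 0], batch[:, 1], batch[:, 2]
        # Negative sampling: corrupt either the head or the tail, chosen at random
        corrupt = torch.randint(0, len(ent2id), (len(batch),))
        mask = torch.rand(len(batch)) < 0.5
        neg_h = torch.where(mask, corrupt, h)
        neg_t = torch.where(mask, t, corrupt)
        loss = torch.clamp(margin + score(h, r, t) - score(neg_h, r, neg_t), min=0).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep entity embeddings on the unit sphere, as in the original TransE paper
        with torch.no_grad():
            ent.weight.data = nn.functional.normalize(ent.weight.data, p=2, dim=-1)

torch.save({"entity": ent.weight.data, "relation": rel.weight.data}, "transe_embeddings.pt")
```

At the scale mentioned in the paper (roughly 5M entities and 24M triples) a plain loop like this would be slow; in practice a dedicated toolkit such as OpenKE with multi-threaded training is the more realistic choice, which is consistent with the "several hours on 8 CPUs" estimate above.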