While reading the code, I noticed that the default TransE model provided in NeuralKG is not identical to the model in the original TransE paper. For example, the loss function uses a logsigmoid-based formulation, whereas the original TransE uses a margin-based loss.
How much do these departures from the original paper affect model performance? Also, does NeuralKG provide a TransE model that matches the original paper?
Thank you very much for your suggestion. We have updated the GitHub repository to support MarginRankingLoss, and the change will be included in an upcoming official release.
For KGE models, we have found that the self-adversarial loss usually outperforms the margin ranking loss. The following papers also support this observation: CAKE: A Scalable Commonsense-Aware Framework For Multi-View Knowledge Graph Completion; KGTuner: Efficient Hyper-parameter Search for Knowledge Graph Learning
We hope this helps.
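To make the difference between the two losses concrete, here is a minimal dependency-free sketch of both formulations, not NeuralKG's actual implementation: the margin ranking loss from the original TransE paper (Bordes et al., 2013) and the logsigmoid loss with self-adversarial negative weighting in the style of RotatE (Sun et al., 2019). Scores are treated as distances d(h + r, t), so lower means more plausible; the function names and the `alpha` temperature are illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def margin_ranking_loss(pos_dist, neg_dists, margin=1.0):
    # Original TransE hinge loss: penalize whenever a negative triple's
    # distance is not at least `margin` larger than the positive triple's.
    return sum(max(0.0, margin + pos_dist - n) for n in neg_dists) / len(neg_dists)

def self_adversarial_loss(pos_dist, neg_dists, margin=1.0, alpha=1.0):
    # Logsigmoid formulation: push sigma(margin - d) toward 1 for positives
    # and sigma(d - margin) toward 1 for negatives.
    pos_term = -math.log(sigmoid(margin - pos_dist))
    # Self-adversarial weighting: a softmax over negative plausibility, so
    # harder negatives (smaller distance) receive larger weight.
    exps = [math.exp(alpha * (margin - n)) for n in neg_dists]
    z = sum(exps)
    neg_term = -sum((e / z) * math.log(sigmoid(n - margin))
                    for e, n in zip(exps, neg_dists))
    return pos_term + neg_term

# Well-separated triples: the hinge loss is exactly zero, while the
# logsigmoid loss still provides a small nonzero gradient signal.
print(margin_ranking_loss(0.2, [2.0, 3.0]))   # 0.0
print(self_adversarial_loss(0.2, [2.0, 3.0]))  # small positive value
```

The practical difference is visible in the last two lines: once every negative clears the margin, the hinge loss saturates at zero and stops training those triples, whereas the self-adversarial logsigmoid loss keeps a smooth, negative-hardness-weighted signal, which is one reason it tends to perform better for KGE models.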