
Questions about MetaQA #18

Closed
czh17 opened this issue Oct 14, 2022 · 1 comment

Comments

czh17 commented Oct 14, 2022

Hello, this is excellent work for both the KBQA and KGC fields. However, I have a few questions about the MetaQA task, and I would be very grateful if you could answer them:
1. In this KGQA task, where is the Prompt idea reflected? (Is it the 'triples' field in the .json dataset?)
2. In the 1-hop setting, how is the 'triples' field of each example in train.json and test.json constructed? (The original MetaQA data does not provide the subgraphs corresponding to each question.)
3. When reproducing the KGT5 work, did you use the same KGC pretraining strategy as the original paper? (The framework does not seem to contain code for that stage.)
Thanks again, and I hope you can clarify these points. Best wishes!

CheaSim (Collaborator) commented Oct 14, 2022

Thank you for your interest in our project.

  1. For the KGQA task, the Prompt idea mainly lies in using prompts to construct entity embeddings. The current implementation, however, follows the KGT5 seq2seq approach; we plan to later add the method described in our paper, which uses `question [MASK]` to predict the entity tokens.
  2. For reproducing KGT5, we currently plan to use the publicly released KGT5 pretrained weights. We will also add the KGT5 pretraining task later (it only requires constructing the pretraining samples; the code itself is almost unchanged).
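To illustrate the point about only needing to construct pretraining samples, here is a minimal sketch of turning KG triples into seq2seq pairs in the text-to-text style used by KGT5. The `predict tail:`/`predict head:` verbalization and the `|` separator are assumptions based on the KGT5 paper's format, not this repository's actual code:

```python
def build_pretraining_samples(triples):
    """Turn (head, relation, tail) triples into seq2seq (input, target) pairs.

    For each triple we emit two samples, one asking for the tail entity
    and one asking for the head entity, so the model learns link
    prediction in both directions. The prompt strings are illustrative.
    """
    samples = []
    for head, relation, tail in triples:
        samples.append((f"predict tail: {head} | {relation}", tail))
        samples.append((f"predict head: {tail} | {relation}", head))
    return samples

# Example usage with one hypothetical MetaQA-style triple:
triples = [("Inception", "directed_by", "Christopher Nolan")]
for source, target in build_pretraining_samples(triples):
    print(source, "->", target)
```

The resulting (input, target) pairs can then be fed to the same seq2seq training loop used for the QA task, which is why the code barely changes.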

@zxlzr zxlzr closed this as completed Oct 15, 2022