Hello, I have a few questions I'd like to ask. While reading the code in modeling.py, I noticed that (as I understand it) your code:
1. uses RoBERTa to generate embeddings for the original input x,
2. generates the embeddings for the prompt positions from random embeddings passed through a linear layer,
3. splices the two together with torch.where (original-input embeddings + prompt embeddings), and
4. feeds the result back into RoBERTa to produce the hidden states.

Why is it done this way?
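The splicing described above can be sketched roughly as follows. This is a hypothetical reconstruction: class and attribute names (`PromptEmbedSketch`, `prompt_linear`, the flag convention) are my own assumptions, not the identifiers used in modeling.py.

```python
import torch
import torch.nn as nn

class PromptEmbedSketch(nn.Module):
    """Hypothetical sketch of the embedding splice; the real modeling.py
    may use different names, shapes, and flag semantics."""
    def __init__(self, word_embeddings: nn.Embedding, n_prompt: int, hidden: int):
        super().__init__()
        self.word_embeddings = word_embeddings                    # RoBERTa's lookup table
        self.prompt_embeddings = nn.Embedding(n_prompt, hidden)   # random, learnable
        self.prompt_linear = nn.Linear(hidden, hidden)

    def forward(self, input_ids, input_flags):
        # 1) raw token embeddings for the original input x
        raw_embeddings = self.word_embeddings(input_ids)
        # 2) learnable continuous embeddings for the prompt positions
        #    (in this sketch, the flag value doubles as the prompt-token index)
        prompt = self.prompt_linear(self.prompt_embeddings(input_flags))
        # 3) splice: keep raw embeddings where flag == 0, prompt embeddings elsewhere
        inputs_embeds = torch.where(input_flags.unsqueeze(-1) > 0,
                                    prompt, raw_embeddings)
        # 4) in the real model, inputs_embeds is then fed back into RoBERTa
        return inputs_embeds
```

With this setup, the prompt positions carry gradients through `prompt_linear` and `prompt_embeddings` while the original tokens keep their pretrained embeddings.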
Hi, I'm not sure whether you've resolved this, but I have the same question. I believe the random embeddings correspond to the learnable continuous tokens in the paper's figure, but when torch.where is applied they never actually get spliced in: input_flags is all zeros, so the resulting inputs_embeds is identical to raw_embeddings (tested on the TACRED dataset).
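The effect described (all-zero input_flags making torch.where a no-op) can be checked in isolation; the tensors here are stand-ins, not the model's actual values:

```python
import torch

# All-zero input_flags, as observed on TACRED: torch.where then selects the
# raw embedding at every position, so inputs_embeds equals raw_embeddings.
raw_embeddings = torch.randn(1, 5, 8)              # stand-in for word embeddings
prompt_embeds = torch.randn(1, 5, 8)               # stand-in for the learnable prompt part
input_flags = torch.zeros(1, 5, dtype=torch.long)  # all zeros

inputs_embeds = torch.where(input_flags.unsqueeze(-1) > 0,
                            prompt_embeds, raw_embeddings)
print(torch.equal(inputs_embeds, raw_embeddings))  # True
```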
Figured it out: in data_prompt.py, the lines that would insert the unk placeholder tokens are commented out:

```python
# prompt = [tokenizer.unk_token_id, tokenizer.unk_token_id] + \
prompt = self.temp_ids[rel_name]['mask_ids'][0] + e1 + \
         self.temp_ids[rel_name]['mask_ids'][1] + \
         self.temp_ids[rel_name]['mask_ids'][2] + e2  # + \
         # [tokenizer.unk_token_id, tokenizer.unk_token_id]
```
@1159007075 @1120161807 @THUCSTHanxu13 Hi everyone, I'd like to ask: why does the prompt-tuning code use only RoBERTa's word_embedding and not the position_embedding?
```python
raw_embeddings = self.model.embeddings.word_embeddings(input_ids)
```
(Although I see that the final logits are computed as a dot product, which may be why only the word embeddings are used.) But doesn't RoBERTa then lose the sequence-order information when encoding the text?
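For what it's worth: if the spliced inputs_embeds are later passed to `RobertaModel(inputs_embeds=...)`, the standard HuggingFace implementation still adds absolute position embeddings inside its embeddings layer, so order information is not necessarily lost at the encoder input. A simplified paraphrase of that step (my own sketch, omitting token-type embeddings, LayerNorm, dropout, and RoBERTa's padding-index offset):

```python
import torch
import torch.nn as nn

hidden_size, max_positions = 8, 16
position_embeddings = nn.Embedding(max_positions, hidden_size)

def add_positions(inputs_embeds):
    # position ids 0..seq_len-1, broadcast over the batch dimension
    seq_len = inputs_embeds.size(1)
    position_ids = torch.arange(seq_len).unsqueeze(0)
    return inputs_embeds + position_embeddings(position_ids)

# two identical token embeddings at different positions come out different
x = torch.ones(1, 5, hidden_size)
out = add_positions(x)
```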