(1) For the frequency-based refinement step (Frequency Refinement), is it reflected in the code as:

myverbalizer = KnowledgeableVerbalizer(tokenizer, classes=class_labels, candidate_frac=cutoff, pred_temp=args.pred_temp, max_token_split=args.max_token_split).from_file(f"{args.openprompt_path}/scripts/{scriptsbase}/knowledgeable_verbalizer.{scriptformat}")

and is the `cutoff` here the threshold mentioned in the experiments? I'm not sure whether I've understood this correctly.

(2) In the few-shot experiments, why are the labels of the support set not removed? The paper says "Our proposed Contextualized Calibration utilizes a limited amount of unlabeled support data to yield significantly better results", yet in the experiment code this part is commented out, which confuses me.
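To make question (1) and the calibration in question (2) concrete, here is a minimal Python sketch of my understanding of the two steps. This is not the actual OpenPrompt/KPT implementation — the function names, tensor shapes, and the interpretation of `candidate_frac` as a keep-fraction over label words are assumptions on my part:

```python
import torch

def frequency_refinement(label_word_probs: torch.Tensor, candidate_frac: float) -> torch.Tensor:
    """Sketch of frequency refinement: keep only the top `candidate_frac`
    fraction of label words, ranked by their prior probability under the PLM
    (my reading of what `cutoff` / `candidate_frac` controls).
    Returns the indices of the retained label words."""
    num_keep = max(1, int(len(label_word_probs) * candidate_frac))
    return torch.topk(label_word_probs, num_keep).indices

def contextualized_calibration(word_probs: torch.Tensor, prior_probs: torch.Tensor) -> torch.Tensor:
    """Sketch of contextualized calibration: divide each label word's
    predicted probability by its contextualized prior (estimated on the
    unlabeled support set, labels unused), then renormalize."""
    calibrated = word_probs / (prior_probs + 1e-12)
    return calibrated / calibrated.sum()

# Example: with candidate_frac=0.5, half of the 4 candidate words survive.
kept = frequency_refinement(torch.tensor([0.5, 0.1, 0.3, 0.1]), 0.5)
calibrated = contextualized_calibration(
    torch.tensor([0.4, 0.6]),   # raw label-word probabilities for one input
    torch.tensor([0.2, 0.3]),   # priors estimated from unlabeled support data
)
```

Note that `contextualized_calibration` only needs the support set's inputs to estimate `prior_probs`, not its labels — which is why I would have expected the label values to be removed.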
Thank you, I've seen it. Sorry for the trouble!