Hi author, a question: when decoding objects, is the threshold used for the results in the paper 0.5? Also, did you run any comparison experiments on different threshold values?
```python
# ./utils.py: L39
def __call__(self, text: str, threshold: float = 0.5) -> Set:
    tokened = self.tokenizer.encode(text)
    token_ids, segment_ids = np.array([tokened.ids]), np.array([tokened.type_ids])
    mapping = rematch(tokened.offsets)
    entity_heads_logits, entity_tails_logits = self.entity_model.predict([token_ids, segment_ids])
    entity_heads, entity_tails = np.where(entity_heads_logits[0] > threshold), np.where(entity_tails_logits[0] > threshold)
    subjects = []
```
The threshold in the paper is 0.5; we didn't run an ablation study on different thresholds. In practice, tuning the threshold for different scenarios may help: for precision-focused scenarios you can raise the threshold, and for recall-focused ones you can lower it.
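To illustrate the precision/recall trade-off described above, here is a minimal sketch of the thresholded decoding step, using hypothetical per-token head scores (the logit values and the `decode_heads` helper are made up for illustration; only the `np.where(logits > threshold)` pattern comes from `utils.py`):

```python
import numpy as np

# Hypothetical per-token head scores (sigmoid outputs) for a 6-token sentence.
head_logits = np.array([0.9, 0.3, 0.6, 0.05, 0.55, 0.7])

def decode_heads(logits, threshold):
    # Keep token positions whose score exceeds the threshold,
    # as in the np.where(...) call in utils.py.
    return np.where(logits > threshold)[0].tolist()

print(decode_heads(head_logits, 0.5))   # default threshold: positions [0, 2, 4, 5]
print(decode_heads(head_logits, 0.65))  # higher threshold -> fewer, more confident spans (precision)
print(decode_heads(head_logits, 0.25))  # lower threshold -> more candidate spans (recall)
```

Raising the threshold drops borderline positions (here 0.55 and 0.6), keeping only high-confidence spans; lowering it admits weaker candidates like 0.3.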
Got it, thanks.