An experiment about meta-test #46
Comments
Thanks for your reply. 2022-09-20 22:57:35 INFO: - span_f1 = 0.7218073781712385
I understand the performance will drop, but it performs worse than I expected.
Sorry, I made a mistake earlier. You can't directly remove line 447 in the type classification stage, since it contains the logic that generates the type embedding.
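Not from the repository's actual code, but a minimal toy sketch of the idea being discussed: guard the support-set fine-tuning with a flag instead of deleting lines, so the type-embedding (prototype) computation that the type-classification stage depends on still runs. The names here (meta_test_episode, ablate_finetune, the toy linear encoder) are hypothetical and only illustrate the structure, not Few-NERD's learner.py.

```python
import torch


def meta_test_episode(encoder, support_x, support_y, query_x,
                      ablate_finetune=False, finetune_steps=20, lr=1e-2):
    """Toy meta-test episode: optionally fine-tune on the support set,
    then classify queries by nearest type embedding (prototype)."""
    if not ablate_finetune:
        # Support-set fine-tuning: this is the part the ablation removes.
        optimizer = torch.optim.SGD(encoder.parameters(), lr=lr)
        for _ in range(finetune_steps):
            logits = encoder(support_x)
            loss = torch.nn.functional.cross_entropy(logits, support_y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # Type embeddings (analogous to the logic around line 447) are computed
    # whether or not fine-tuning ran, because classification needs them.
    with torch.no_grad():
        feats = encoder(support_x)
        num_types = int(support_y.max()) + 1
        type_emb = torch.stack(
            [feats[support_y == t].mean(0) for t in range(num_types)])
        query_feats = encoder(query_x)
        dists = torch.cdist(query_feats, type_emb)
        return dists.argmin(dim=1)


# Usage with toy data: each of the 5 types appears twice in the support set.
encoder = torch.nn.Linear(32, 16)
support_x = torch.randn(10, 32)
support_y = torch.tensor([0, 1, 2, 3, 4, 0, 1, 2, 3, 4])
query_x = torch.randn(4, 32)
preds = meta_test_episode(encoder, support_x, support_y, query_x,
                          ablate_finetune=True)
```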
Thanks. The new result seems to be correct. 2022-09-24 20:43:14 INFO: - ***** Eval results inter-test *****
I am very sorry to interrupt again. Why is the 5-shot performance worse than the 1-shot performance after ablating fine-tuning in the meta-test?
Hi @wjczf123, this may be reasonable, although we have not run the corresponding ablation experiments on 5-shot. First, the 5-shot and 1-shot datasets cannot be compared directly; each is just a sampled subset of Few-NERD. That said, according to our experimental results on inter 5-1 and inter 5-5, the 5-shot results do appear better. Second, we found in our experiments that more fine-tuning steps are needed for inter 5-5 and inter 10-5 in the meta-test, so removing the fine-tuning may have a larger impact on 5-shot. Hope this helps.
Thanks. Hope you have a good day.
Hi. I deleted Lines 384-385 and Line 447 of learner.py to avoid fine-tuning on the support set during the meta-test. Is this right? Thanks.