
Questions about the experiment details #31

Closed
speedcell4 opened this issue Sep 14, 2022 · 4 comments


@speedcell4

Hi, thanks for sharing the source code.

  1. In Table 2, are the reported numbers from the test split or the validation split?
  2. In Table 2, for RoBbase (LoRA) on the RTE task, the reported result is 86.6. Is this a typo? It is considerably higher than the full fine-tuning result (delta = 7.9).
@edwardjhu
Collaborator

Thanks for your questions.

  1. I believe these are validation numbers, since the test set is not public; prior work does the same.
  2. Nope, that's not a typo. You can verify it with our checkpoint :)

@speedcell4
Author

speedcell4 commented Sep 15, 2022

Wow, thanks for your quick response. I have two more questions.

  1. If I understand correctly, the BitFit numbers in Table 2 were taken from the original paper, but there are some numbers I cannot find there. For example, you report 92.7 for RoBbase (BitFit) on the MRPC task, while I believe the original paper reports 92.0 in its Table 2. Could you give more details on this?
  2. Do you fine-tune the bias terms? I understand that you disable gradients for the weights, but I did not see you do the same for the biases. The line I mean, and a sketch of the pattern, are below.

self.weight.requires_grad = False
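
For concreteness, here is a minimal sketch of the pattern I am asking about, assuming a LoRA-style linear layer; the class and variable names are illustrative, not the repo's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinearSketch(nn.Module):
    """Illustrative LoRA-style layer: only the dense weight is frozen,
    so the bias keeps requires_grad=True by default."""
    def __init__(self, in_features: int, out_features: int, r: int = 8):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Low-rank update: the effective weight is W + B @ A.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        nn.init.kaiming_uniform_(self.weight)
        # Freeze the pretrained weight only; nothing touches self.bias,
        # so the bias still receives gradients during fine-tuning.
        self.weight.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight + self.lora_B @ self.lora_A, self.bias)
```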

@edwardjhu
Collaborator

You are right. I can't remember where we got 92.7; it should be 92.0.

Yes, the bias term is learnable here, even though it was not in the code used for our experiments. This seems to be a good idea in practice and adds minimal overhead. The checkpointing utility functions should take care of saving/loading biases; a sketch is below. Please let me know if you encounter any issues :)
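
A rough sketch of how the save/load flow can look with the loralib utilities, assuming the `bias` keyword on `mark_only_lora_as_trainable` and `lora_state_dict`; the model below is a stand-in, so check the repo README for the exact API:

```python
import torch
import torch.nn as nn
import loralib as lora

# Stand-in model wrapping a loralib linear layer (illustrative only).
model = nn.Sequential(lora.Linear(768, 768, r=8))

# bias='all' marks every bias as trainable alongside the LoRA factors
# ('none' and 'lora_only' are the other options).
lora.mark_only_lora_as_trainable(model, bias='all')

# ... fine-tune ...

# Passing the same bias flag to lora_state_dict includes the biases
# in the small checkpoint that gets saved.
torch.save(lora.lora_state_dict(model, bias='all'), 'ckpt_lora.pt')

# Restoring: load the pretrained weights first, then the LoRA/bias
# part with strict=False, since the checkpoint holds only a subset.
model.load_state_dict(torch.load('ckpt_lora.pt'), strict=False)
```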

@speedcell4
Author

Thanks~
