
Excellent work, looking forward to following up with further research! #1

Closed
caiyuchen-ustc opened this issue Dec 26, 2023 · 3 comments


caiyuchen-ustc commented Dec 26, 2023

I have a few questions about Sections 5.1 and 5.2:

  1. On the Counterfact dataset, we should not only compare top-k accuracy but also consider the ES and EM metrics introduced by Meng et al., since an intervention may raise the probability of the correct word while also raising the probability of a wrong word (e.g., "The native language of Danielle Darrieux is English" versus the true fact "Danielle Darrieux's native language is French"). A rough sketch of this check appears after this list.
  2. If a specific LASER configuration is tuned on each dataset individually, the prediction performance will improve significantly, but there is a risk of overfitting. I think a single uniform set of hyperparameters for the rank reduction should be found to demonstrate the general effectiveness of this approach.
  3. Nevertheless, I think this is a very worthwhile endeavor, and it gives us valuable insight into the inner workings of the transformer.
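
For concreteness, here is a minimal sketch of the paired comparison I mean, assuming a HuggingFace-style causal LM and single-token completions (the function and argument names are illustrative, not from the LASER codebase):

```python
import torch

def efficacy_check(model, tokenizer, prompt, true_completion, false_completion):
    """Compare next-token probabilities for the true and false completions
    of a Counterfact-style prompt, e.g.
    prompt="The native language of Danielle Darrieux is",
    true_completion=" French", false_completion=" English".
    Assumes each completion is a single token."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    p_true = probs[tokenizer.encode(true_completion, add_special_tokens=False)[0]]
    p_false = probs[tokenizer.encode(false_completion, add_special_tokens=False)[0]]
    es = (p_true > p_false).item()   # efficacy score: does the true answer win?
    em = (p_true - p_false).item()   # efficacy magnitude: by how much?
    return es, em
```

Averaged over the dataset, these would show whether LASER genuinely favors the correct answer over the competing wrong one, rather than inflating both.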

Reference:
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35, 2022.

@caiyuchen-ustc (Author)

Looking forward to the author's reply!

dkmisra added the question (Further information is requested) label Dec 27, 2023

dkmisra commented Dec 29, 2023

Thanks a lot for your comment.

  1. Evaluating the other metrics is on our TODO list. I think it shouldn't be hard to compute them, and we certainly plan to do so.

  2. Your point about finding LASER hyperparameters that work across a range of tasks is also very relevant. One question is which model selection criterion to use. We could take 20% of each dataset as a validation set and compute an aggregate metric, but which metric would make the most sense? One could compute average accuracy, but the scaling factors and difficulty differ across datasets. Alternatively, one could compute a score that counts the number of datasets on which we outperform the base model (a sketch of this selection procedure appears after this list).

    We did notice a common trend: one typically needs to apply LASER to later MLP layers with a significant amount of rank reduction to get the best performance. This is mostly, but not always, true for the results in Table 1 (see the selected hyperparameters in Table 3). I recently tested this hypothesis for the Phi-1.5 LLM on the Counterfact dataset. The base model gives 10% accuracy, and I tried just two choices for LASER: the last transformer layer, keeping just 0.01 of the original rank, applied to either the first or the second weight matrix of the MLP. This choice is guided by the above pattern in Table 3, and it already gave a performance boost of 16%. Therefore, I am somewhat optimistic that we can find a general choice for LASER that outperforms the base model in the majority of settings (a sketch of the rank-reduction operation itself also appears after this list). The code for running the Phi-1.5 experiment is here: https://github.com/pratyushasharma/laser/blob/main/src/intervention_phi15_counterfact.py
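
To make the selection question concrete, here is a minimal sketch of the criterion I have in mind. Everything here is a placeholder rather than the actual codebase API: `evaluate` is assumed to apply a LASER configuration and return validation accuracy, and `datasets` maps a task name to its examples and base-model accuracy.

```python
import random

def select_uniform_laser_config(configs, datasets, evaluate):
    """Pick one LASER configuration (layer, matrix, rank fraction) that
    works across tasks, scored on held-out 20% validation splits. Since
    accuracy scales differ across datasets, a config is scored by the
    number of datasets on which it beats the unmodified base model."""
    def val_split(examples, frac=0.2, seed=0):
        examples = list(examples)                 # copy before shuffling
        random.Random(seed).shuffle(examples)
        return examples[: max(1, int(frac * len(examples)))]

    def score(config):
        return sum(
            evaluate(config, val_split(examples)) > base_accuracy
            for examples, base_accuracy in datasets.values()
        )

    return max(configs, key=score)
```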
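
The intervention itself is just a low-rank approximation of a chosen weight matrix via SVD. A minimal PyTorch sketch, with the attribute path in the usage comment being illustrative (it varies by architecture):

```python
import torch

def laser_rank_reduce(weight: torch.Tensor, rho: float = 0.01) -> torch.Tensor:
    """Return the best rank-k approximation of `weight`, keeping a
    fraction `rho` of its maximum possible rank (rho=0.01 keeps 1%)."""
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    k = max(1, int(rho * S.numel()))                  # singular values kept
    approx = U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]
    return approx.to(weight.dtype)

# Example (illustrative path): reduce the first MLP matrix of the last layer.
# mlp = model.transformer.h[-1].mlp
# mlp.fc1.weight.data = laser_rank_reduce(mlp.fc1.weight.data, rho=0.01)
```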

Thanks a lot for your comments and for the reference to the Meng et al. paper, which we cited and really like. Please let us know if we can help you use this codebase.


dkmisra commented Jan 4, 2024

Closing this issue. Please feel free to reopen it, or send us an email. We will update the README.md once we have more results (adding @pratyushasharma to this thread as well).

dkmisra closed this as completed Jan 4, 2024