
[LLaMA Adapter V2] Evaluation on multiple choice questions. #139

Closed
xiujiesong opened this issue Dec 29, 2023 · 1 comment

Comments


xiujiesong commented Dec 29, 2023

Hi,

Thanks for your great work. I am wondering how we can evaluate LLaMA Adapter V2 on multiple-choice questions. I expect the model to generate the letter (A/B/C/D) of the correct choice, but it often fails to produce a letter even when I explicitly instruct it to. Do you have any good solutions?

Collaborator

csuhan commented Dec 30, 2023

Hi @xiujiesong, I think you can construct a prompt like:
[Question] A. xxx. B. xxx. C. xxx. D. xxx. Answer: [A/B/C/D].

Then you can compute the loss for each candidate answer (similar to a perplexity-based evaluation) and choose the one with the lowest loss.
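A minimal sketch of this loss-based scoring, assuming a causal LM that returns per-position logits and a tokenizer with an `encode` method (the `pick_answer` helper and the assumption that each choice letter is a single token are hypothetical, not part of the LLaMA Adapter API):

```python
import torch
import torch.nn.functional as F

def choice_loss(logits, input_ids, answer_start):
    """Summed cross-entropy over the answer tokens only.

    logits: (seq_len, vocab) outputs for input_ids (seq_len,).
    answer_start: index of the first answer token in input_ids.
    """
    # Causal shift: the token at position t is predicted by logits at t-1.
    shift_logits = logits[:-1]
    shift_labels = input_ids[1:]
    losses = F.cross_entropy(shift_logits, shift_labels, reduction="none")
    # Only the answer span contributes to the score.
    return losses[answer_start - 1:].sum()

def pick_answer(model, tokenizer, question, choices=("A", "B", "C", "D")):
    """Hypothetical driver: score each candidate and return the cheapest."""
    scores = []
    for c in choices:
        text = f"{question} Answer: {c}"
        ids = torch.tensor(tokenizer.encode(text))
        with torch.no_grad():
            logits = model(ids.unsqueeze(0))[0]  # (seq_len, vocab)
        answer_start = len(ids) - 1  # assumes the letter is one token
        scores.append(choice_loss(logits, ids, answer_start).item())
    return choices[scores.index(min(scores))]
```

Summing the loss over only the answer tokens keeps the shared question prefix from dominating the comparison, so the candidates differ only in how likely the model finds each letter.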
