Unable to reproduce PPL for GPT-Neo-125M using lm-eval #16

Open
pedrogengo opened this issue Dec 13, 2022 · 1 comment

Comments

@pedrogengo

Hey!

I'm trying to run the following command with the lm-eval CLI, but I can't reproduce the results you shared. Did you do something different? If not, do you have any idea where I'm going wrong?

python main.py \
	--model gpt2 \
	--model_args pretrained=EleutherAI/gpt-neo-125M \
	--device 0 \
	--tasks wikitext \
	--batch_size 1
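For reference, lm-eval's wikitext task reports word-level perplexity, which (as I understand it) is the exponential of the negative total log-likelihood divided by the number of words. A minimal sketch of that conversion, with a hypothetical `word_perplexity` helper (not part of lm-eval's API), can be useful for checking whether two runs disagree on the metric or on the underlying log-likelihoods:

```python
import math

def word_perplexity(logprobs, num_words):
    """Word-level perplexity: exp of the negative total log-likelihood
    averaged over the number of words (not subword tokens)."""
    return math.exp(-sum(logprobs) / num_words)

# Sanity check: a model assigning probability 1/4 to each of 8 words
# should score a perplexity of exactly 4.
lp = [math.log(0.25)] * 8
print(word_perplexity(lp, 8))  # 4.0
```

Note that word-level perplexity depends on the word count of the raw text, not the tokenizer's subword count, so differences in text preprocessing between runs can change the reported number even when the model's log-likelihoods are identical.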
@bling0830
Contributor

May I ask what result you got for EleutherAI/gpt-neo-125M? Did you get an error when running it?
