
Local API, Getting "No support for logits". How to run command line with generate_until only? #1409

Open
lxe opened this issue Feb 7, 2024 · 3 comments



lxe commented Feb 7, 2024

I'm trying to run this against a local instance of oobabooga exposing an OpenAI-like API:

OPENAI_API_KEY=x lm_eval --model local-chat-completions --model_args model=mymodel,base_url=http://127.0.0.1:5000/v1 --tasks lambada_openai

I'm getting:

  File "/home/lxe/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 499, in loglikelihood
    raise NotImplementedError("No support for logits.")
NotImplementedError: No support for logits.

I see in the docs that the loglikelihood request type isn't supported, but I can't find anything in the docs about how to configure the harness to run only generate_until tasks from the command line.
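For reference, a hedged sketch of a workaround: since the error comes from lambada_openai issuing loglikelihood requests, picking a purely generative task avoids that code path entirely (gsm8k is, to my understanding, a generate_until task in the harness; the task name and available flags may vary by installed version):

```shell
# Sketch, not verified against every harness version: lambada_openai requires
# loglikelihood, which the chat-completions backend cannot serve. A purely
# generative task (gsm8k uses generate_until) sidesteps the error.
OPENAI_API_KEY=x lm_eval \
  --model local-chat-completions \
  --model_args model=mymodel,base_url=http://127.0.0.1:5000/v1 \
  --tasks gsm8k
```

There is no CLI switch to force an arbitrary task to use generate_until only; the request types are a property of the task itself.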

@Svjatoslav

Did you manage to solve the problem?


0-hero commented Mar 13, 2024

#1174 (comment)


yaronr commented May 29, 2024

Is it now possible to run evaluations against self-hosted LLM instances, whether or not they return logits?
I'm getting the same error and can't figure out how to proceed.
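If the self-hosted server does return logprobs through the legacy completions endpoint, a possible sketch (the endpoint support, model name, and URL here are assumptions about your setup) is to use the local-completions backend instead, which can serve loglikelihood requests:

```shell
# Sketch, assuming the server implements /v1/completions with logprobs
# (needed for loglikelihood-based tasks such as lambada_openai).
# Some harness versions also expect a tokenizer=... entry in model_args.
OPENAI_API_KEY=x lm_eval \
  --model local-completions \
  --model_args model=mymodel,base_url=http://127.0.0.1:5000/v1/completions \
  --tasks lambada_openai
```

If the server only exposes the chat endpoint, loglikelihood tasks remain unsupported and only generate_until tasks will run.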
