File "/home/lxe/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 499, in loglikelihood
raise NotImplementedError("No support for logits.")
NotImplementedError: No support for logits.
I see in the docs that the `loglikelihood` request type isn't supported, but I can't find anything in the docs about how to configure the harness to run only `generate_until` tasks from the command-line interface.
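For reference, one way to sidestep the `loglikelihood` path is to pick only generation-based tasks. A sketch of such an invocation, assuming a recent lm-evaluation-harness with the `local-chat-completions` model type and an OpenAI-compatible server listening at `http://127.0.0.1:5000` (the model name and URL here are placeholders for your own setup):

```shell
# Sketch: the chat-completions backend only implements generate_until,
# so pairing it with a generation-only task avoids loglikelihood requests.
# Adjust model= and base_url= to match your local server.
lm_eval \
  --model local-chat-completions \
  --model_args model=my-local-model,base_url=http://127.0.0.1:5000/v1/chat/completions \
  --tasks gsm8k \
  --apply_chat_template
```

Tasks that score via log-probabilities (most multiple-choice benchmarks) will still fail against an endpoint that doesn't return logits, so the task selection matters as much as the backend.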
Is it now possible to run evaluations against self-hosted LLM instances, whether or not they return logits?
I'm getting the same error and can't figure out how to proceed.
I'm trying to run this against a local instance of oobabooga that exposes an OpenAI-compatible API:
I'm getting