
Using harness with a local internal API #1177

Closed
asimokby opened this issue Dec 20, 2023 · 4 comments

Comments

@asimokby

Hello there,

Is there a way to run the evals with a local internal API URL of a model? Do I have to create a class under lm_eval/models and implement some specific models (if so what are they)? Any idea where to start?

Thank you.

@haileyschoelkopf
Contributor

Hi! Assuming your local API can be called through the same interface as OpenAI (i.e., via `openai.OpenAI(base_url=base_url)`), this will be addressed and documented in #1174!
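For reference, a sketch of what that looks like once merged. This assumes your local server exposes an OpenAI-compatible endpoint at `http://localhost:8000/v1`; the model name, port, and exact `--model`/`--model_args` names are placeholders and may differ by harness version, so check `lm_eval --help` for your install:

```shell
# Point the OpenAI client at the local server; many local servers
# ignore the key but the client still requires one to be set.
export OPENAI_API_KEY="dummy-key"

# Hypothetical invocation -- model name, base_url, and tasks are examples.
lm_eval --model openai-chat-completions \
    --model_args model=my-local-model,base_url=http://localhost:8000/v1 \
    --tasks hellaswag
```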

@haileyschoelkopf
Contributor

ChatCompletions support for arbitrary API providers is now merged. The same changes made to OpenAIChatCompletionsLM will need to be ported to OpenAICompletionsLM to support non-chat models in this way. This is planned to be added ASAP, but if you'd like to take it on in the meantime, we'd welcome a PR as well!

@StellaAthena
Member

Also, if your internal API is different from the OpenAI one, you can still implement custom support! You can use the existing OpenAI model file as a guide... basically you need to implement the three request types (`loglikelihood`, `loglikelihood_rolling`, and `generate_until`).
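To make that concrete, here is a minimal sketch of those three methods for a local HTTP API. The base URL, routes, JSON field names, and payload shapes are all assumptions about your internal API, not part of the harness; in the real harness you would subclass `lm_eval.api.model.LM` and register it, but a plain class keeps the sketch self-contained:

```python
# Hypothetical sketch: a model class that forwards the three harness
# request types to a local HTTP API. Routes and JSON fields are assumed.
import json
import urllib.request


class LocalAPILM:
    def __init__(self, base_url="http://localhost:8000"):  # assumed URL
        self.base_url = base_url

    def _post(self, route, payload):
        """POST JSON to the local API and return the decoded JSON response."""
        req = urllib.request.Request(
            f"{self.base_url}/{route}",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def loglikelihood(self, requests):
        # Each request is a (context, continuation) pair; the harness
        # expects a (logprob, is_greedy) tuple back for each.
        out = []
        for context, continuation in requests:
            resp = self._post(
                "loglikelihood",  # assumed route
                {"context": context, "continuation": continuation},
            )
            out.append((resp["logprob"], resp["is_greedy"]))
        return out

    def loglikelihood_rolling(self, requests):
        # Log-likelihood of the full text, used e.g. for perplexity tasks.
        return [
            self._post("loglikelihood_rolling", {"text": text})["logprob"]
            for (text,) in requests
        ]

    def generate_until(self, requests):
        # Free-form generation; gen_kwargs carries stop strings etc.
        return [
            self._post("generate", {"context": context, **gen_kwargs})["text"]
            for context, gen_kwargs in requests
        ]
```

The only API-specific part is `_post`, so adapting the sketch to a different wire format means changing one method.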

@haileyschoelkopf
Contributor

+1, see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md for a walkthrough on adding custom support.
