llama.cpp local eval support #686

Open
lbux opened this issue Apr 17, 2024 · 0 comments
Labels
enhancement New feature or request

Comments

@lbux

lbux commented Apr 17, 2024

Is your feature request related to a problem? Please describe.
Ollama is a good solution for local evaluation in projects that already use it. If a project uses llama.cpp directly, though, it seems redundant to have to run both (one for generation, one for evaluation).

Describe the solution you'd like
llama.cpp has a built-in web server that exposes an OpenAI-compatible API, which should be compatible with litellm.
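
For illustration, a minimal sketch of what this could look like, assuming litellm's OpenAI-compatible passthrough and a llama.cpp server already running locally. The model name, port, and API key below are placeholders, not part of this request:

```python
import litellm

# Hypothetical setup: a llama.cpp server started locally, e.g.
#   ./server -m ./models/model.gguf --port 8080
# (binary name depends on the build), which exposes an
# OpenAI-compatible /v1/chat/completions endpoint.
response = litellm.completion(
    model="openai/local-model",           # "openai/" prefix routes through litellm's OpenAI-compatible provider
    api_base="http://localhost:8080/v1",  # placeholder base URL for the local llama.cpp server
    api_key="sk-no-key-required",         # placeholder; the local server usually ignores the key unless --api-key is set
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```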

Describe alternatives you've considered
As mentioned, Ollama works, but I don't want to have to download two copies of a model when I could share a single model file by using llama.cpp for everything.


Thank you for your feature request - We love adding them

lbux added the enhancement (New feature or request) label on Apr 17, 2024