Improve OpenAI API compatibility #216
Comments
This is interesting. I can see us listing either 1) all available local loras or 2) all previously used / cached loras.
Trying to use LoRAX for structured generation with different frameworks, the current divergence from the OpenAI structured generation mode is causing issues. I believe it's due to this part of the LoRAX documentation: "Note: Currently a schema is required. This differs from the existing OpenAI JSON mode, in which no schema is supported." (see also #389). Looking at OpenAI requests, the format looks more like the following.
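(A minimal sketch of an OpenAI JSON-mode request for comparison; the model name and message content are illustrative, not from the original report.)

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "user", "content": "List three colors as JSON."}
  ],
  "response_format": {"type": "json_object"}
}
```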
This is missing some chunks of what is actually expected in LoRAX, not to mention that the correct keys are not being set. LoRAX would expect something like this inside the response_format key:
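(A sketch based on the documentation quoted above, which states a schema is required; the schema fields here are illustrative.)

```json
{
  "response_format": {
    "type": "json_object",
    "schema": {
      "type": "object",
      "properties": {
        "colors": {"type": "array", "items": {"type": "string"}}
      },
      "required": ["colors"]
    }
  }
}
```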
This makes one of the great use cases for LoRAX, namely multi-model systems, harder to use with frameworks that are already coded for structured output / function calling / tool usage against OpenAI-compatible endpoints.
I second the need for this.
Hi @nidhoggr-nil, you're absolutely correct that LoRAX does not yet support the tool choice / function-calling style of structured generation. That being said, I have a PR in progress to add support for functions/tools that I'm hoping to land in the next week or so. Stay tuned :)
Fantastic! Looking forward to it.
Hi @jeffreyftang, is there anything we can do to help or test? I noticed a function_calling branch, which I will be happy to test if it is functional. Thanks!
Hi @codybum, the branch is mostly functional in terms of enforcing the desired output format, but there's still some work to be done for automatically injecting the available tools into the prompt (currently the prompter would need to do so manually). Once that's done and the code cleaned up a bit, it should be ready to go.
@jeffreyftang this is great news. We have been experimenting with structured output and the results are promising. Do you happen to have an example tool and prompt configuration that we could work from? I am happy to give it a go.
@jeffreyftang given the weather calling example, what would need to be provided in the input prompt? I tried to piece together the following example, which appears to hit some of the tools code, but returns only {"generated_text":"[\n ]"}:

```shell
curl http://10.33.31.21:8080/generate \
  -X POST \
  -d '{
    "inputs": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>You are a helpful assistant that can access external functions. The responses from these function calls will be appended to this dialogue. Please provide responses based on the information from these function calls. The functions available to you are as follows:\nget_current_weather\n<|eot_id|><|start_header_id|>user<|end_header_id|>What is the current temperature of New York, San Francisco and Chicago?<|eot_id|><|start_header_id|>assistant<|end_header_id|>",
    "parameters": {
      "tools": [{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}}}}]
    }
  }' \
  -H 'Content-Type: application/json'
```
Sorry for the long delay here - there's a PR up for review now: #536. There's an example of how to invoke it in the PR description as well :)
Hi @jeffreyftang, I still need to try with Mistral as in your example, but with Llama 3 8B Instruct, Hermes-2-Theta-Llama-3-8B-32k, and Llama-3-Groq-8B-Tool-Use I get a response like the following: {"generated_text":"[ ]"}. I will wait until the merge and then try to build again; perhaps I am doing something wrong in the build process.
Hi @codybum, thanks for the feedback! It's possible I'm doing something wrong with the prompt modification - I'll take a closer look at some of those models.
@jeffreyftang Chat Templates (https://huggingface.co/docs/transformers/chat_templating) will handle this already, right? But some models do not support tools in their default Chat Templates (like Llama 3).
Feature request
Implement v1/models like the OpenAI API to list available local loras (see the sketch below). This is dependent on #199. There is also a hurdle to this: a user may have multiple base models and multiple local loras, and I don't know of an effective way to filter the loras applicable to the currently loaded base. Probably this can be worked on after the initial release.
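For reference, OpenAI-compatible clients expect the /v1/models endpoint to return a list shaped like the following; a sketch in which the adapter id, timestamp, and owner are illustrative:

```json
{
  "object": "list",
  "data": [
    {
      "id": "my-org/my-local-lora",
      "object": "model",
      "created": 1700000000,
      "owned_by": "local"
    }
  ]
}
```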
Motivation
Many OpenAI-compatible webUI projects, like ollama-webui, expect a list of available models via the /v1/models endpoint.
This would only work for local loras as suggested in #199, since it is practically impossible to list all loras from the Hugging Face Hub.
Your contribution
I'm trying to extend the current OpenAI implementation to support listing models. I can submit a PR once completed, if the community is interested, but it will be useless without #199.