Is your feature request related to a problem? Please describe.
Most models already ship the chat template for the OpenAI chat endpoint in their tokenizer_config.json file on Hugging Face.
Redeclaring the template in LocalAI seems a bit counter-productive when it could be automagically applied.
Describe the solution you'd like
The model definition contains the following parameter:

```yaml
template:
  use_tokenizer_template: true
```
If this parameter is set, on OpenAI chat endpoints (outside of the scope of functions), LocalAI doesn't attempt to template the prompt at all and instead passes the `messages` slice down the line to the vLLM Python backend, which applies the templating itself.
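For illustration, this is roughly the mechanism on the backend side once the untouched messages arrive. It is a minimal sketch using Hugging Face `transformers`, not LocalAI's actual backend code, and the model name is only an example:

```python
from transformers import AutoTokenizer

# Sketch of what the vLLM backend can do when use_tokenizer_template
# is set: the messages slice arrives unmodified, and the tokenizer's
# own chat template renders the prompt.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "Hello, who are you?"},
]

# apply_chat_template reads the chat_template field shipped in the
# model's tokenizer_config.json, so nothing needs to be redeclared
# in the LocalAI model definition.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```

Since the template comes straight from `tokenizer_config.json`, any upstream fix to a model's chat template is picked up automatically, with no change to the LocalAI config.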
Additional context
PR incoming.