[Feature]: Initial LLM token #5609
Comments
I think I understand this use case, but just to confirm: is there a reason this cannot be part of the prompt?
There is a difference between instructing the model with a prompt like "Here is an example for my use case:"
Why not just use the system prompt for this? Also, where have you seen this before? I have never encountered such a feature.
@Etelis What I described was for the system prompt.
I think Anthropic's API has a similar feature.
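For reference, Anthropic's Messages API documents this as response prefilling: the final entry in `messages` may have role `assistant`, and the model continues generating from its content. A minimal sketch of the request shape (payload construction only, no network call; the model name and prompt are illustrative):

```python
# Anthropic-style "prefill": the last message is a partial assistant turn,
# and generation resumes from the end of its content.
request = {
    "model": "claude-3-5-sonnet-20240620",  # illustrative model name
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "List three prime numbers as JSON."},
        # Prefilled start of the response; the model continues after this text.
        {"role": "assistant", "content": '{"primes": ['},
    ],
}

print(request["messages"][-1]["role"])  # assistant
```

Because the forced prefix opens a JSON object, the continuation is strongly steered toward completing valid JSON, which is exactly the format-control use case described in this issue.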
🚀 The feature, motivation and pitch
Not sure whether this has already been implemented, but would it be possible to add initial tokens (i.e., text) at the beginning of the generation process? So basically, the first few tokens of the output would be, for example, "Sure Thing!", and the model would then continue generating tokens from that point on.
Alternatives
No response
Additional context
This is an effort to gain more control over the model's output, for returning responses in specific formats and for reducing randomness in the responses.
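At the raw-completions level, the requested behavior can be approximated today by rendering the chat template yourself and leaving the assistant turn open after the desired prefix, then sending the result to a plain completion endpoint. A hypothetical sketch (the template markers below are illustrative placeholders, not any specific model's tokens):

```python
def build_prefilled_prompt(user_msg: str, prefix: str) -> str:
    """Render a chat-style prompt whose assistant turn is left open
    after `prefix`, so the model continues generating from there.
    The <|user|>/<|assistant|> markers are illustrative placeholders."""
    return (
        f"<|user|>\n{user_msg}\n"
        f"<|assistant|>\n{prefix}"  # no end-of-turn token: the turn stays open
    )

prompt = build_prefilled_prompt("Summarize the report.", "Sure Thing! ")
print(prompt.endswith("Sure Thing! "))  # True: generation resumes here
```

The key detail is that no end-of-turn marker follows the prefix, so a completion endpoint treats "Sure Thing! " as text already emitted by the assistant rather than as a closed message.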