[enhancement]: Feedback on prompt length #1633
Comments
This might be useful: https://github.com/openai/tiktoken
The token limit is hard-coded to 77; I'm not sure why, though. Perhaps @lstein can shed some light on this.
It's a stable diffusion limit. Other projects have some hacks that kinda get around it, but it turns out that what they're doing is basically the same thing that invoke does with blends, just with less control. As for feedback about token counts, I just submitted a PR to help with that: #2523
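The "hacks" mentioned above generally amount to chunking: splitting the prompt's token IDs into 75-token windows and encoding each window separately inside its own 77-token CLIP context. A minimal sketch of that idea, assuming token IDs are already available from a tokenizer (the `BOT`/`EOT` marker values below follow CLIP's vocabulary, but treat them as illustrative):

```python
# Sketch of the "chunking" workaround some frontends use: split the
# prompt's token IDs into windows of at most 75 tokens, then frame and
# pad each window to the full 77-token CLIP context.
# Marker IDs are illustrative (CLIP's begin/end-of-text tokens).

BOT, EOT = 49406, 49407
WINDOW = 75  # 77 minus the two special tokens

def chunk_tokens(ids: list[int]) -> list[list[int]]:
    """Split token IDs into 77-token windows with BOT/EOT framing."""
    chunks = []
    for i in range(0, max(len(ids), 1), WINDOW):
        body = ids[i:i + WINDOW]
        # Pad short windows with EOT so every chunk is exactly 77 long.
        padded = [BOT] + body + [EOT] * (WINDOW + 1 - len(body))
        chunks.append(padded)
    return chunks
```

Each chunk would then be encoded independently and the embeddings combined downstream, which is why terms that fall past a chunk boundary can behave differently from terms near the start of the prompt.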
Gotcha, and I was just reading #1541 as well. The problem of course is that at multiple levels prompt structure isn't compatible. So some sort of translator (to use blends as well?) would be necessary. At the very least,
I just spent a few hours wondering why my invoke-ai results end up in a wrong location until I moved the location keyword up. 🤦♂️ Very frustrating experience.
So then what do you think should be done here (vote with emoji)? 🇦 Give a simple indicator (e.g. red) when a prompt exceeds 77 tokens |
Closing this, as prompts should be allowed to exceed 77 tokens. However, there could potentially be some visual indicator (a hard problem) to inform the user of where breaks are added.
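The indicator idea in the closing comment could be sketched as follows. This is a minimal illustration, not InvokeAI code; the token count is assumed to come from the model's actual CLIP tokenizer, which this sketch does not include. The function name and message format are hypothetical.

```python
# Minimal sketch of a prompt-length indicator, assuming a token count
# obtained elsewhere (e.g. from the CLIP tokenizer used by the model).
# The 77-token CLIP context includes two special tokens, leaving 75
# usable prompt tokens.

CLIP_MAX_TOKENS = 77
SPECIAL_TOKENS = 2  # begin-of-text and end-of-text markers

def prompt_status(token_count: int) -> str:
    """Classify a prompt's length against the usable CLIP window."""
    usable = CLIP_MAX_TOKENS - SPECIAL_TOKENS
    if token_count <= usable:
        return f"ok ({token_count}/{usable})"
    # Tokens past the limit are dropped or chunked depending on the
    # frontend, so flag the overflow for the user.
    return f"over by {token_count - usable} ({token_count}/{usable})"
```

A UI could map the "over by" case to the red indicator proposed in the vote above.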
Is there an existing issue for this?
Contact Details
No response
What should this feature add?
I've been writing longer and longer prompts, and sometimes I'll keep trying to increase the weight of a term with no effect, only to find that I have to move it higher in the prompt to make it work. Clearly, I'm going over the token limit. It would be nice if this limit could be increased like in automatic1111, but failing that (or in addition), it would be nice to have some feedback about the length of the prompt so that we can see how close we are to the limit (or how far over, in my case).
Alternatives
Of course, just increasing the limit would be great, but it would still be nice to have a token counter.
Additional Content
No response