
[enhancement]: Feedback on prompt length #1633

Closed · 1 task done
whosawhatsis opened this issue Nov 30, 2022 · 7 comments
Labels
enhancement New feature or request

Comments

@whosawhatsis
Contributor

Is there an existing issue for this?

  • I have searched the existing issues

Contact Details

No response

What should this feature add?

I've been writing longer and longer prompts, and sometimes I'll keep trying to increase the weight of a term with no effect, only to find that I have to move it higher in the prompt to make it work. Clearly, I'm going over the token limit. It would be nice if this limit could be increased like in automatic1111, but failing that (or in addition), it would be nice to have some feedback about the length of the prompt so that we can see how close we are to the limit (or how far over, in my case).

Alternatives

Of course, just increasing the limit would be great, but it would still be nice to have a token counter.

Additional Content

No response

@whosawhatsis added the enhancement (New feature or request) label on Nov 30, 2022
@whosawhatsis
Contributor Author

This might be useful: https://github.com/openai/tiktoken
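
For illustration, a minimal token-count sketch using tiktoken. Note that tiktoken ships OpenAI's BPE encodings, not the CLIP tokenizer Stable Diffusion uses, so the count is only a rough approximation of where the 77-token limit falls:

```python
import tiktoken

# cl100k_base is one of tiktoken's built-in encodings; CLIP's vocabulary
# differs, so treat this as an approximate count, not the exact number SD sees.
enc = tiktoken.get_encoding("cl100k_base")
prompt = "a highly detailed oil painting of a lighthouse at dusk"
print(len(enc.encode(prompt)), "tokens (approximate)")
```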

@dsully
Contributor

dsully commented Feb 5, 2023

The token limit is hard coded to 77 in ldm/modules/encoders/modules.py and a few other places.

I'm not sure why though. Perhaps @lstein can shed some light on this.
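
For reference, here is roughly what that hard-coded truncation does, sketched with the Hugging Face CLIPTokenizer (the same ViT-L/14 tokenizer SD 1.x uses) rather than the actual ldm code. Anything past the 77-token window, which includes the start/end tokens, never reaches the text encoder:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "your (long) prompt here ..."
ids = tokenizer(prompt)["input_ids"]  # includes BOS/EOS special tokens
print(f"{len(ids)} tokens (limit is 77, counting start/end tokens)")

# Mirrors the hard-coded behavior: with truncation at max_length=77,
# any trailing tokens are silently dropped before encoding.
kept = tokenizer(prompt, truncation=True, max_length=77)["input_ids"]
if len(ids) > len(kept):
    print(f"{len(ids) - len(kept)} trailing tokens would be ignored")
```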

@whosawhatsis
Contributor Author

> I'm not sure why though. Perhaps @lstein can shed some light on this.

It's a Stable Diffusion limit. Other projects have some hacks that kinda get around it, but it turns out that what they're doing is basically the same thing that Invoke does with blends, just with less control. As for feedback about token counts, I just submitted a PR to help with that: #2523

@dsully
Contributor

dsully commented Feb 5, 2023

Gotcha, and I was just reading #1541 as well. The problem, of course, is that the prompt structure isn't compatible at multiple levels, so some sort of translator (to use blends as well?) would be necessary.

At the very least, 77 should be assigned to a constant in the code. 😄

@GammelSami

I just spent a few hours wondering why my invoke-ai results ended up in the wrong location until I moved the location keyword up in the prompt. 🤦‍♂️

Very frustrating experience.

@src-r-r
Contributor

src-r-r commented Jan 18, 2024

So then what do you think should be done here (vote with emoji)?

🇦 Give a simple indicator (e.g. red) when a prompt exceeds 77 tokens
🇧 Show a pop-up detailing why a prompt is invalid?
🇨 Something else?

@hipsterusername
Member

Closing this, as prompts should be allowed to exceed 77 tokens. However, there could potentially be some visual indicator (a hard problem) to inform the user of where breaks are added.
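
For anyone who wants to see where those breaks would land, here's a rough sketch of the common long-prompt approach (an assumed, illustrative chunking, not necessarily Invoke's internal implementation): split the token ids into 75-token windows, leaving room for BOS/EOS in each chunk, and print the text that falls in each window:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def show_breaks(prompt: str, window: int = 75) -> None:
    """Print the prompt split into 75-token windows, so you can see where
    a chunked encoder would insert breaks. Illustrative only."""
    ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    for i in range(0, len(ids), window):
        chunk = ids[i:i + window]
        print(f"chunk {i // window}: {tokenizer.decode(chunk)}")

show_breaks("a very long prompt describing a scene in great detail ...")
```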
