In many cases, a model was trained on only a few examples of something and simply doesn't know it well enough to produce consistent results. You'd have to test each phrase to see how much it changes images, which means a lot of generation with and without the phrase, across different surrounding prompts with the same seeds. That would take quite a bit of time and would have to be done per phrase, so it would help if we could easily upload our results to a central site that collects this information.
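A minimal sketch of the comparison step, assuming the two same-seed renders are already available as NumPy arrays (the `phrase_effect` helper and the stand-in arrays are illustrative, not part of any existing tool):

```python
import numpy as np

def phrase_effect(img_with: np.ndarray, img_without: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two same-seed renders,
    one generated with the phrase and one without. A score near 0 suggests
    the phrase had little effect on the output."""
    if img_with.shape != img_without.shape:
        raise ValueError("images must share a shape (same size/channels)")
    return float(np.mean(np.abs(img_with.astype(np.float32) -
                                img_without.astype(np.float32))))

# Stand-in arrays in place of real generations (same seed, prompt +/- phrase).
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)
print(phrase_effect(a, a))             # identical renders -> 0.0
print(phrase_effect(a, 255 - a) > 0)   # very different renders -> True
```

Averaging this score over several surrounding prompts would give one number per phrase that could be uploaded to a shared database.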
-
Problem:
Models don't work well with prompts they weren't trained on, but we can't tell which prompts those are.
For example, when you prompt for a particular description or a certain artist's style, the output will ignore your intention if the model isn't familiar with that description or artist.
These "bad prompts" are often mixed in with other, functional prompts and are hard to pick out.
Solution:
Add a button next to the prompt box. On click, it would highlight the prompt terms the model wasn't trained on.
Not sure if such an implementation is technically feasible, though?
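One cheap heuristic such a button could use (this is an assumption, not how any existing UI works): words that the model's tokenizer fragments into many subword pieces tend to be rare in the training data. Below is a toy greedy longest-match splitter against a tiny stand-in vocabulary; a real implementation would query the model's own tokenizer (e.g. the CLIP BPE vocab) instead. `VOCAB`, `split_word`, and `flag_bad_words` are all hypothetical names:

```python
# Tiny stand-in vocabulary; a real check would load the model's BPE vocab.
VOCAB = {"a", "cat", "paint", "ing", "in", "the", "style", "of", "art",
         "green", "zx", "qv", "o", "r", "n", "s", "t", "k"}

def split_word(word: str) -> list[str]:
    """Greedy longest-match split of a word into known vocabulary pieces."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # unknown character becomes its own piece
            i += 1
    return pieces

def flag_bad_words(prompt: str, max_pieces: int = 2) -> list[str]:
    """Return words that fragment into more than max_pieces subwords,
    i.e. candidates the model likely wasn't trained on."""
    return [w for w in prompt.lower().split()
            if len(split_word(w)) > max_pieces]

print(flag_bad_words("a cat painting in the style of zxqvorn"))
# -> ['zxqvorn']
```

Fragmentation alone won't catch everything (an artist's name can tokenize cleanly yet still be unknown to the model), so it would probably need to be combined with empirical per-phrase test results like those discussed above.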