Inquiry on maximum input text prompt length #36
Right now the output is limited by the context window of the model (1024 tokens), which equates to about 14 seconds of audio. So the text should fit roughly that duration (meaning ~2–3 sentences). For longer texts you can either generate the chunks one at a time (reusing the same history prompt to continue with the same voice) or feed the first generation in as the history for the second. I know that's still a bit inconvenient; I'll try to add better support for that in the next couple of days.
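The workflow described above (split long text into ~2–3 sentence pieces, then generate each piece with the same history prompt so the voice stays consistent) could be sketched roughly as below. The helper names `chunk_text` and `synthesize_long` and the speaker preset `"v2/en_speaker_6"` are illustrative, not from this thread, and the `generate_audio`/`preload_models` calls assume the `bark` package's public API.

```python
import re

def chunk_text(text, max_sentences=2):
    """Split text into chunks of at most `max_sentences` sentences,
    so each chunk stays within the ~14 s the model can generate."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        " ".join(sentences[i:i + max_sentences])
        for i in range(0, len(sentences), max_sentences)
    ]

def synthesize_long(text, speaker="v2/en_speaker_6"):
    """Generate audio for a long text by synthesizing each chunk
    with the same history prompt, then concatenating the waveforms."""
    # Imports kept inside the function so chunk_text above can be used
    # without having bark and its model weights installed.
    from bark import generate_audio, preload_models
    import numpy as np

    preload_models()
    pieces = []
    for chunk in chunk_text(text):
        # Reusing the same history_prompt keeps the voice consistent
        # across chunks (though, as noted below, it is not guaranteed).
        pieces.append(generate_audio(chunk, history_prompt=speaker))
    return np.concatenate(pieces)
```

A simple sentence splitter like this will mishandle abbreviations ("Dr.", "e.g."); for production use, a proper sentence tokenizer would be a safer choice.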
Thanks man @gkucsko. Could you please share a code example of using the history prompt?
I added a parameter for it: https://github.com/JonathanFly/bark
Hey @JonathanFly @gkucsko, I have a problem processing a long MAN/WOMAN conversation text. I split the large text into smaller chunks, but the generated voice is not clear, and I get different voices for the same history_prompt.
Sometimes the history prompt is not respected, since a GPT-style model can technically just come up with a new speaker, so you might need another attempt or two. Some prompts also work better than others. See also #21.
I want to know the maximum text_prompt length supported by the model, and the best practice or method for dividing a long text into chunks for this model.