
feat: backend support for TTS (Bark, etc.) #126

Closed
oliverbob opened this issue Nov 22, 2023 · 30 comments
Labels
enhancement New feature or request
Comments

@oliverbob
Contributor

Is it possible to have native support for Bark TTS, or the LangChain version of it, since we already have that microphone prompt?

@tjbck tjbck changed the title Support for Bark TTS feat: support for TTS (Bark, etc.) Nov 22, 2023
@tjbck
Contributor

tjbck commented Nov 22, 2023

Hi, thanks for the suggestion. Sounds like an interesting idea; I'll see what I can do about it, but only after I have every previous feature request out of the way. In the meantime, if you could implement a working prototype using Python and provide us with implementation examples, that would be sublime. Thanks.

@tjbck tjbck added the enhancement New feature or request label Nov 22, 2023
@walking-octopus

Bark is rather unstable, slow, and overkill for an assistant. Piper, however, seems fine. It also has Python support.

I also wonder whether the server or the client should be responsible for TTS... Piper is written in C++, so a WASM port is possible, if desired.

@tjbck
Contributor

tjbck commented Dec 5, 2023

I'll be looking into this in the near future! In the meantime, TTS support has already been implemented via the legacy Web Speech API. Thanks!

@oliverbob
Contributor Author

Since we already have the speaker button there, I think we can integrate Piper, since it's lightweight and fast.

The only requirement is that the server has Piper installed via:

pip install piper-tts

Directory structure:

/flask-piper-app
├── app.py
├── static
│   └── welcome.wav
└── templates
    └── index.html

Python:

from flask import Flask, render_template, request, send_file
import subprocess  # safer than os.system: no shell involved, so no injection risk

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/play', methods=['POST'])
def play_text():
    if 'text' in request.form:
        text = request.form['text']

        # Generate the audio file
        generate_audio(text)

        # Return the generated audio file to the client
        return send_file('static/welcome.wav', mimetype='audio/wav', as_attachment=False)

    return render_template('index.html')

def generate_audio(text):
    # Run the piper CLI directly and feed the text on stdin. This replaces
    # os.system(f'echo "{text}" | piper ...'), which breaks on quotes in the
    # text and is vulnerable to shell injection.
    subprocess.run(
        ['piper', '--model', 'en_US-lessac-medium.onnx',
         '--output_file', 'static/welcome.wav'],
        input=text.encode('utf-8'),
        check=True,
    )

if __name__ == '__main__':
    app.run(debug=True, port=5000)
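
A quick way to exercise the /play endpoint without the HTML page below is a minimal test client like this sketch (assuming the Flask server above is running on localhost:5000):

import requests

# Post form-encoded text, exactly as the HTML form below does.
response = requests.post(
    'http://localhost:5000/play',
    data={'text': 'Hello from Piper!'},
)
response.raise_for_status()

# Save the returned WAV so it can be played with any audio player.
with open('reply.wav', 'wb') as f:
    f.write(response.content)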

Here's the HTML (which you can convert to Svelte):

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Flask Piper App</title>
</head>
<body>
    <h4>Welcome to Piper App</h4>
    <p>Click "Play Audio" to hear the synthesized speech.</p>

    <!-- Display the text inside a div for user reference -->
    <div id="displayText">
        This is the text that will be read aloud. You can customize this paragraph.
    </div>

    <form id="textForm" method="post" action="/play">
        <input type="submit" value="Play Audio">
    </form>

    <hr>

    <!-- Audio player to play the generated audio -->
    <audio id="audioPlayer">
        <source id="audioSource" src="" type="audio/wav">
        Your browser does not support the audio element.
    </audio>

    <script>
        // Update the audio source when the form is submitted
        document.getElementById('textForm').addEventListener('submit', function(event) {
            event.preventDefault();
            var text = document.getElementById('displayText').innerText;

            // Make an asynchronous POST request to the /play route
            fetch('/play', {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/x-www-form-urlencoded',
                },
                body: 'text=' + encodeURIComponent(text),
            })
            .then(response => response.blob())
            .then(blob => {
                // Create a Blob URL for the audio source
                var blobUrl = URL.createObjectURL(blob);
                document.getElementById('audioSource').src = blobUrl;

                // Load and play the audio
                document.getElementById('audioPlayer').load();
                document.getElementById('audioPlayer').play();
            })
            .catch(error => console.error('Error:', error));
        });
    </script>
</body>
</html>

This way, our model responses will not sound like Stephen Hawking.

@tjbck
Contributor

tjbck commented Dec 26, 2023

I'll actively take a look after #216, but Piper doesn't seem to support macOS. If any of you know any workarounds for this, please let us know. Thanks.

Encountering this issue: rhasspy/piper#203

@tjbck tjbck added this to the v1.0 milestone Dec 30, 2023
@tjbck
Contributor

tjbck commented Dec 30, 2023

Let's get the ball rolling on this one! Stay tuned!

@tjbck tjbck changed the title feat: support for TTS (Bark, etc.) feat: backend support for TTS (Bark, etc.) Dec 30, 2023
@tjbck tjbck pinned this issue Dec 30, 2023
@diblasio

May I also suggest that this feature include an option to use OpenAI TTS as well, considering there's already a place to input your API key in the UI? Their model sounds more natural for those of us who are trying to use AI for language learning.
https://platform.openai.com/docs/guides/text-to-speech

@explorigin
Contributor

Piper will likely support wasm compilation soon which would allow browser-side generation: rhasspy/piper#352

@oliverbob
Contributor Author

oliverbob commented Jan 28, 2024

Piper will likely support wasm compilation soon which would allow browser-side generation: rhasspy/piper#352

I actually made a pull request that integrated Piper. But I deleted it, since I recall Timothy saying it is not well supported on his MacBook, or on Mac in general.

If you want, I can put a Piper integration together again, but it would necessitate removing the browser speech recognition default, unless someone would be kind enough to add a new "piper button" as a sign that I should put it back. That new speaker icon should differentiate between Speech Recognition (the default) and the one used for Piper (I'm not very good at Svelte, but I know quite a lot about JavaScript). The speech, though, will not be browser-controlled (not WASM yet): the prompt response will be sent to the server, and the audio generated there by Piper will be served back to the browser.

The only downside is that for longer prompts the rendered audio file gets larger, at least in the most simplified implementation (without using a complex compression algorithm).

Let me know, so I can open a new pull request should this still be helpful. Alternatively, we can create a piper branch in this repo for research purposes, so other developers can look at and build on the work. Because, if I'm not mistaken, OpenAI's Whisper server is not free of charge. It's fast, but not free.

Piper is better than Bark: you need a huge GPU to run Bark, and on smaller GPUs it takes hours before Bark can talk back to the user's text prompt. With Piper, a message as long as this comment, at medium voice quality, generates between 1 and 5 MB. Piper should be installed where the UI is running; it will then generate the voice back to you from the server in 10 to 30 seconds, sometimes longer, and a minute or more for longer text. But if you run Piper on a GPU, it's as quick as lightning. The only remaining issue is how to compress the audio file after Piper generates it. I'm sure there are countless developers here who could figure that out on top of this simple example, because for longer text the file grows to several MB, and the voice model (--model WHATEVER-medium.onnx) is itself quite large (up to 70 MB). It shouldn't be included in the pull request, but it can be downloaded after running the Piper Flask server, or via a bash script (which could also be included in the Ollama WebUI run script).
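
On the compression point: a minimal sketch, assuming ffmpeg is installed on the server (file names follow the earlier Flask example). Re-encoding the Piper WAV as Opus shrinks speech audio by roughly an order of magnitude at little cost in intelligibility:

import subprocess

def compress_wav(wav_path: str, ogg_path: str) -> None:
    # Re-encode the WAV produced by Piper as Opus-in-Ogg; 32 kbps is
    # plenty for synthesized speech and cuts the file size dramatically.
    subprocess.run(
        ['ffmpeg', '-y', '-i', wav_path, '-c:a', 'libopus', '-b:a', '32k', ogg_path],
        check=True,
    )

compress_wav('static/welcome.wav', 'static/welcome.ogg')

The <audio> element would then need type="audio/ogg" instead of audio/wav.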

@tjbck
Contributor

tjbck commented Feb 6, 2024

OpenAI TTS support has been added with #656! As for local TTS support, Piper seems promising, so let's wait until they merge the two blocking PRs.

@oliverbob
Contributor Author

Thanks Timothy.

@tjbck
Contributor

tjbck commented Feb 22, 2024

The Piper library seems to be unmaintained. Looking for alternatives at the moment; open to suggestions!

@jmtatsch
Contributor

jmtatsch commented Mar 1, 2024

Piper works well on Mac too, if you build from source and make a tiny change to the CMakeLists 🙈

I am pretty sure @synesthesiam will get around to merging those pull requests; Piper seems to be his baby, after all.
He is just incredibly busy with all the voice assistant integration for Home Assistant.

I played around with bark.cpp and coqui.ai TTS and both are far too slow to be useful.

@justinh-rahb
Collaborator

I agree. Out of the big three projects for local TTS, Piper is probably the best hope we've got. I really don't understand how this particular niche is so devoid of development; it's one of the most asked-for features in any local AI project.

@synesthesiam

Piper is definitely still being maintained! As @jmtatsch said, I've just been busy with other stuff. One thing that's held up development is needing to replace the espeak-ng library due to its license.

I think this niche is fairly devoid of development because very few projects leave the demo stage before the authors are on to the next model/paper. I want Piper to be more of a "boring" technology in the sense that it does a job well without always chasing state-of-the-art.

@justinh-rahb
Collaborator

I very much agree with that part of the Unix philosophy: do one thing and do it well. Thanks for the status update @synesthesiam 🙏

@yeungxh

yeungxh commented Mar 13, 2024

I deployed a TTS/STT service on my own server with a REST API. How can I integrate my own API into this web UI?

@tjbck tjbck unpinned this issue Mar 15, 2024
@jmtatsch
Contributor

Can the existing base URL for OpenAI TTS be made configurable?
I found this adapter:
https://github.com/matatonic/openedai-speech
It serves an OpenAI TTS API with either Piper or Coqui TTS on the back end.

@lee-b

lee-b commented Mar 30, 2024

Can the existing base URL for OpenAI TTS be made configurable? I found this adapter https://github.com/matatonic/openedai-speech serving an OpenAI TTS API with either Piper or Coqui TTS on the back end.

This looks very promising. The API seems to work well, and it's a similar Docker-based setup to Ollama. I agree: just allowing the OPENAI_BASE_URL for audio to be tweaked would go a long way toward a fully local whisper + xtts-v2 setup with this.

@lee-b

lee-b commented Mar 31, 2024

FYI, I made this work with a local openedai-speech (linked above) on my branch, here:

https://github.com/lee-b/open-webui

It currently requires an extra environment variable and uses a custom Dockerfile and runner script to run the thing, but it works. I'll integrate this better if the core team wants to advise on their preferred way to solve some of the issues I hacked around.

@fraschm1998

fraschm1998 commented Apr 3, 2024

FYI, I made this work with a local openedai-speech (linked above) on my branch, here:

https://github.com/lee-b/open-webui

It currently requires an extra environment variable and uses a custom Dockerfile and runner script to run the thing, but it works. I'll integrate this better if the core team wants to advise on their preferred way to solve some of the issues I hacked around.

Any way to fix this?

Got OPENAI_AUDIO_BASE_URL: http://192.168.10.14:8002/v1
open-webui-two  | ERROR:apps.openai.main:404 Client Error: Not Found for url: http://192.168.10.14:8002/audio/speech
open-webui-two  | Traceback (most recent call last):
open-webui-two  |   File "/app/backend/apps/openai/main.py", line 154, in speech
open-webui-two  |     r.raise_for_status()
open-webui-two  |   File "/usr/local/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
open-webui-two  |     raise HTTPError(http_error_msg, response=self)
open-webui-two  | requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://192.168.10.14:8002/audio/speech
open-webui-two  | INFO:     192.168.10.14:58846 - "POST /openai/api/audio/speech HTTP/1.1" 500 Internal Server Error
open-webui-two  | INFO:     192.168.10.14:53534 - "GET /_app/immutable/nodes/11.76457ae4.js HTTP/1.1" 304 Not Modified

Server is running:

docker logs openedai-speech-server-1 --follow                                             
INFO:     Started server process [1]                                                                                                        
INFO:     Waiting for application startup.                                                                                                  
INFO:     Application startup complete.                                                                                                     
INFO:     Uvicorn running on http://0.0.0.0:8002 (Press CTRL+C to quit)                                                                     
 > Using model: xtts                                                                                                                        
INFO:     172.24.0.1:38734 - "POST /v1/audio/speech HTTP/1.1" 200 OK                                                                        
INFO:     172.24.0.1:41190 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41196 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41210 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41212 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41220 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41234 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41244 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41252 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41264 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:41272 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:39624 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:39630 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:39638 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:39652 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:39656 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:39660 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:39674 - "POST /audio/speech HTTP/1.1" 404 Not Found                                                                    
INFO:     172.24.0.1:39688 - "POST /audio/speech HTTP/1.1" 404 Not Found

@fraschm1998

fraschm1998 commented Apr 3, 2024

FYI, I made this work with a local openedai-speech (linked above) on my branch, here:

https://github.com/lee-b/open-webui

It currently requires an extra environment variable and uses a custom Dockerfile and runner script to run the thing, but it works. I'll integrate this better if the core team wants to advise on their preferred way to solve some of the issues I hacked around.

Fixed with the following, kudos to ChatGPT:

        # requires: from urllib.parse import urljoin (at the top of the file)
        if not base_url.endswith("/"):
            base_url += "/"

        speech_url = urljoin(base_url, "audio/speech")
Python's urljoin is used here to combine base_url with "audio/speech". urljoin merges URLs the way a browser resolves a relative link, and its handling of trailing slashes can lead to unexpected results: if the base URL does not end with a slash, urljoin treats the last path segment (here, /v1) as a file name and replaces it rather than appending to it, which silently drops part of the path.
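
A quick demonstration of the behavior described above (the paths are taken from the logs earlier in this thread):

from urllib.parse import urljoin

# Without a trailing slash, urljoin treats 'v1' as a file name and
# replaces it, silently dropping part of the path:
print(urljoin('http://192.168.10.14:8002/v1', 'audio/speech'))
# -> http://192.168.10.14:8002/audio/speech

# With a trailing slash, 'v1' is kept as a directory:
print(urljoin('http://192.168.10.14:8002/v1/', 'audio/speech'))
# -> http://192.168.10.14:8002/v1/audio/speech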

@jmtatsch
Contributor

jmtatsch commented Apr 4, 2024

I think it would be best if Open WebUI just let us set a different TTS base URL via an env variable like OPENAI_TTS_BASE_URL.
That way, users can plug in whatever OpenAI-TTS-compatible server they like, and there are no licensing woes.
And it is very little work, since OpenAI TTS is already implemented and works beautifully 😍
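
A minimal sketch of that suggestion (OPENAI_TTS_BASE_URL is the hypothetical variable proposed above, not an existing setting; the request shape is the standard OpenAI /v1/audio/speech API, which openedai-speech mirrors):

import os
import requests

# Hypothetical env var from the suggestion above; falls back to OpenAI itself.
base_url = os.environ.get('OPENAI_TTS_BASE_URL', 'https://api.openai.com/v1').rstrip('/')

response = requests.post(
    f'{base_url}/audio/speech',
    headers={'Authorization': f"Bearer {os.environ.get('OPENAI_API_KEY', 'sk-111111111')}"},
    json={'model': 'tts-1', 'input': 'Hello from a pluggable TTS backend!', 'voice': 'alloy'},
)
response.raise_for_status()

# OpenAI returns MP3 by default; openedai-speech does the same.
with open('speech.mp3', 'wb') as f:
    f.write(response.content)

The rstrip('/') also sidesteps the trailing-slash pitfall discussed earlier in this thread.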

@jmtatsch
Contributor

@tjbck would you be open to the approach taken in https://github.com/lee-b/open-webui, should someone create a pull request?

@hxypqr

hxypqr commented Apr 24, 2024

Is there a simple way to change the TTS model to my own now? I can't stand the voice of this robot lol.

@jmtatsch
Contributor

Since cbd18ec you should be able to set your own OpenAI-compatible base URL.

@UXVirtual

In case this helps anyone who is running the open-webui Docker container along with Ollama on the same PC and using openedai-speech, you can use the following configuration:

  • API Base URL: http://host.docker.internal:8000/v1
  • API Key: sk-111111111

host.docker.internal is required since openedai-speech is exposed via localhost on your PC, but open-webui cannot normally access this from within its container.

Note that openedai-speech doesn't need an API key, but setting a dummy one is required because open-webui validates this field.

@jmtatsch
Contributor

Works wonderfully now.
https://github.com/matatonic/openedai-speech wraps Piper, xtts_v2, and parler-tts, by the way, so there is a good choice of qualities and latencies.

@justinh-rahb
Collaborator

I'll leave it up to @oliverbob to decide whether to call this issue fixed; otherwise I will close it as such in a few days if we don't hear from them.
