
Feature Request: PA #67

Closed
bs42 opened this issue Feb 19, 2021 · 22 comments · Fixed by #135
Labels: automation (Home Automation plugin/integration, e.g. Alexa, Home Assistant), enhancement (New feature or request), help wanted (Extra attention is needed)

@bs42 commented Feb 19, 2021

The "Monoprice 6 Zone Home Audio Multizone Controller and Amplifier Kit" has a cool feature which is a 3.5mm input jack for a Public Address system. It mutes the input audio and plays whatever comes in on the line and then resumes the channels back to where they were.

I think something similar, if not the same, would be really useful. It doesn't even need to be a hardware input jack for my use case.

Example use case: an automation in Home Assistant, triggered on the doorbell, that announces over the entire house that there is a visitor at the front door. If this could be done via an API command that simply plays a specific audio file over all the speakers and then restores those speakers to whatever they were playing before, that would be excellent.

Optionally, it could be done via ducking rather than a complete cut-out of the playing audio: just lower the volume by some percentage and play the audio file over the top. If "ducking" vs. "mute" could be controlled via the API command, that would be even better.

@linknum23 added the enhancement (New feature or request) label Feb 21, 2021
@linknum23 (Contributor)

We could handle the basic mute, play announcement, unmute sequence using presets #55 (in development right now). What do you suggest for the audio source? Should this be a mic button in the webapp, or is there an existing option that makes more sense? An RCA input could be used for this as well, for integrating existing intercom systems.

Fancier versions are possible in the hardware (the muxes support mixing audio), but would require more advanced firmware and software support that probably does not make sense initially.

@bs42 (Author) commented Feb 22, 2021

A mic button in the webapp, so you could actually talk into the mobile device and have it broadcast, would be a very useful option. The ability to choose another RCA input, or an audio file over HTTP whose location is sent as part of a REST request, would be fantastic, with the latter being the most useful to me.

The muxing requiring hardware-level support totally makes sense; that is really a wish-list item and not as important as simply being able to mute the current stream, play a temporary file/source, and then unmute the existing stream.

@linknum23 added the automation (Home Automation plugin/integration, e.g. Alexa, Home Assistant) label Feb 22, 2021
@linknum23 (Contributor)

One route for adding a microphone to the web interface would be to use this Mumble web interface. AmpliPi would also host the Mumble server. I have not looked at how configurable this Mumble setup is, but it could be a good starting point.

https://github.com/Johni0702/mumble-web

@linknum23 added this to To do in AmpliPi Mar 3, 2021
@linknum23 added the help wanted (Extra attention is needed) label Mar 11, 2021
@sumnerboy12

Would it be possible to use an HTTP endpoint as a source for the announcement? I'm thinking about hosting a local text-to-speech service for generating spoken announcements from Home Assistant, and piping them to AmpliPi.

@linknum23 (Contributor)

Yup! This is totally feasible. A PA source could really be any digital or analog input to the system, so an HTTP endpoint that accepted either a file or a stream could be integrated pretty easily.

From MicroNova's perspective, we would like to determine the use case that gets the most people a working PA system and add that functionality first. I know there is a growing number of Home Assistant (HA) users, so something that integrates with HA could be that option, especially if it easily ports to other automation systems.

@sumnerboy12

I don't use HA personally, so I would prefer to see a generic approach that could be adopted by any automation platform. I use https://github.com/svrooij/node-sonos-tts-polly as an example for sending voice announcements to my Sonos devices.

You can easily build a similar service with local TTS - using pico2wave or the like.

I guess my point is that building the PA feature to accept a simple HTTP URL, which can point to anything you like, might be a good fit. You could make TTS voice announcements, or host some MP3s of a barking dog that get played if the front gate is opened and no one is home :).
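
For anyone wanting to experiment with that pattern, here is a minimal sketch of a local pico2wave-backed HTTP service. The port, the /say route, and the text parameter are purely illustrative and not anything AmpliPi-specific; it assumes Flask and the pico2wave CLI (libttspico-utils) are installed:

```python
# Minimal local text-to-speech HTTP service sketch.
# Assumptions: Flask is installed and the pico2wave CLI is on the PATH.
# The port, route, and query parameter names are illustrative only.
import os
import subprocess
import tempfile

from flask import Flask, abort, request, send_file

app = Flask(__name__)

@app.route("/say")
def say():
    text = request.args.get("text")
    if not text:
        abort(400, "missing 'text' query parameter")
    # Render the text to a WAV file with pico2wave and serve it back.
    # (Cleanup of the temp directory is omitted for brevity.)
    wav_path = os.path.join(tempfile.mkdtemp(), "announcement.wav")
    subprocess.run(["pico2wave", "-w", wav_path, text], check=True)
    return send_file(wav_path, mimetype="audio/wav")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5005)
```

An automation could then hand AmpliPi a URL like http://tts-host:5005/say?text=Someone+is+at+the+front+door.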

@linknum23 (Contributor)

I was just playing with pico2wave; it seemed to work decently enough. This could be fairly easy to integrate.

It looks like we would add a couple of endpoints to the API: one to play a sound file, the other to use a TTS engine to write a file and then play it.

For playing the file we could use VLC, which handles lots of different formats and potentially URLs as well.

We could add a preset in the API that sets all of the zones we want to announce on to the VLC audio output.
As for actually sequencing this, we would load the announce preset, play the file with VLC, and then, when it finishes, load the state that was saved before the announce preset was loaded. This would effectively revert back to the previous state.
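
Roughly, the sequencing could look like the sketch below. The ampli.* calls are placeholders for whatever internal preset/state interface we end up with; only the cvlc invocation reflects a real command:

```python
# Rough sketch of the announce sequence described above.
# The `ampli` object and its save_state/load_preset/restore_state methods are
# placeholders, not an existing AmpliPi API; only the cvlc call is real.
import subprocess

def announce(ampli, media_url: str) -> None:
    saved_state = ampli.save_state()       # remember what every zone was playing
    ampli.load_preset("announce")          # route the announcement zones to the VLC output
    try:
        # cvlc plays the file/URL; the trailing vlc://quit makes it exit when playback ends
        subprocess.run(["cvlc", media_url, "vlc://quit"], check=True)
    finally:
        ampli.restore_state(saved_state)   # revert to whatever was playing before
```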

@sumnerboy12

Would it be easier to just add an API endpoint to play a URL? Then you can use your own local TTS server, an online tool like AWS Polly, or something else. This would also work for playing sound files, and pretty much anything else you want to throw at it.

The idea of building a TTS engine into AmpliPi sounds very useful, but you just know that users are going to want to use their own systems, voices, languages, etc.

@bs42 (Author) commented Jun 14, 2021

For this specific thing, I would agree on just having an API endpoint that could play a URL (and perhaps the ability to have the actual audio file POSTed to the API). Having the AmpliPi run TTS seems out of scope for the project.

@bs42 (Author) commented Jun 14, 2021

With an API endpoint that could accept media data as well as a URL, the app could have a built-in PA feature that records the message, feeds the WAV/MP3/FLAC to the API as data, and plays it. Then one does not have to deal with temporary storage of media files for transient messages.
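
As a rough illustration of the client side of that idea (note: this upload form of the endpoint does not exist; the route, the multipart field name, and the handling are all assumptions):

```python
# Hypothetical client-side sketch of posting a recorded announcement directly.
# A multipart upload to /api/announce does not exist today; the field name
# ("media"), hostname, and endpoint behavior are assumptions for illustration only.
import requests

with open("doorbell_message.wav", "rb") as f:
    resp = requests.post(
        "http://amplipi.local/api/announce",
        files={"media": ("doorbell_message.wav", f, "audio/wav")},
    )
resp.raise_for_status()
```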

@linknum23 (Contributor) commented Jun 15, 2021

I like this. Lemme see if I can make a prototype of the URL version that runs something like cvlc https://www.nasa.gov/mp3/640149main_Computers%20are%20in%20Control.mp3 vlc://quit in the background; then we can expand it to add file and recording capabilities.

@linknum23 mentioned this issue Jun 15, 2021
@linknum23 (Contributor)

On #135 we have a working PA prototype on the endpoint api/announce using POST; right now it accepts only a URL, in the JSON payload, to specify the file to play. Here is an example curl command to trigger it:

curl -X POST "http://amplipi314.local/api/announce" \
 -H "Accept: application/json" \
 -H "Content-Type: application/json" \
 -d '{"media":"https://www.nasa.gov/mp3/640149main_Computers%20are%20in%20Control.mp3"}'

Behind the scenes this uses something equivalent to the following cvlc command (cvlc is the command-line-only version of VLC): cvlc MEDIA vlc://quit, where MEDIA is whatever string was passed in the media field of the request payload. Any URL/VLC URI that works with that command and makes it exit should work with the announce API call.
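
For anyone scripting this from Python, the same request as the curl example above looks like this (using the requests library):

```python
# Same announcement request as the curl example above, using Python requests.
import requests

resp = requests.post(
    "http://amplipi314.local/api/announce",
    json={"media": "https://www.nasa.gov/mp3/640149main_Computers%20are%20in%20Control.mp3"},
)
resp.raise_for_status()
```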

This will need a little cleanup and a way of notifying the webapp that an announcement is in progress, so it doesn't attempt to send API calls during the announcement and so the user knows what's going on during a long announcement. Note: other API calls will stall until the announcement is finished.

I imagine in the future we will add some additional parameters to this, such as the volume level to announce at and a way to specify other input types like streaming digital audio or analog input.

@linknum23 (Contributor)

I think I am getting pretty close to fulfilling @sumnerboy12's use case and getting closer to fulfilling @bs42's use case. What do you all think? What else do we need in an initial version of this?

@linknum23 self-assigned this Jun 16, 2021
@sumnerboy12

Sounds really good, thanks @linknum23! One thing that would be very useful, which you mentioned in your PR notes, is the ability to set the volume for the announcement, as well as define which zones should play it. I am thinking, for my use case, that when the washing machine finishes I would only want to play an announcement in the living room, not in everyone's bedroom ;).

@bs42 (Author) commented Jun 16, 2021

It's looking great. Agreed, the ability to choose between "all" and a list of zones to play in, as well as volume controls, would be needed for the final product, but this is very useful as is.

@sumnerboy12

Just pre-ordered my AmpliPi - with features like this being added, and such a responsive dev team, I am all in!!

@linknum23 (Contributor)

Awesome 😎

@linknum23 (Contributor)

Alright just tested adding zones and groups of zones, along with volume adjustment. Should be able to get those washing machine announcements just to the living room 😄
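
For a rough idea of usage, a zone-limited announcement could look something like the sketch below (here the media URL points at the kind of local TTS service sketched earlier in the thread). Note the zones and vol field names are placeholders for illustration only; see #135 for the actual request schema:

```python
# Hypothetical example of announcing only to specific zones at a chosen volume.
# The "zones" and "vol" field names and units are guesses for illustration only;
# check #135 for the real schema.
import requests

resp = requests.post(
    "http://amplipi314.local/api/announce",
    json={
        "media": "http://tts-host:5005/say?text=The+washing+machine+is+done",
        "zones": [1],   # assumed living-room zone id
        "vol": -40,     # assumed announcement volume value
    },
)
resp.raise_for_status()
```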

@bs42 (Author) commented Jun 18, 2021

Excellent!

@sumnerboy12

Sounds brilliant! Can't wait to get my hands on one of these!

@vszander

Thanks, Lincoln! I was able to stream Bluetooth audio from my smartphone to my AmpliPi dev image using this method, which is compatible with ALSA. The Bluetooth stream is going to the default channel on the sound card. Can we make an ALSA device an AmpliPi source?

@linknum23 (Contributor)

@vszander let's talk about Bluetooth support in a separate issue; check out the relevant issue here: #150

@linknum23 moved this from To do to In progress in AmpliPi Aug 4, 2021
@linknum23 linked a pull request Aug 23, 2021 that will close this issue
AmpliPi automation moved this from In progress to Done Aug 23, 2021