
Adding voice feedback; also audio problems #78

Closed
beto2909 opened this issue Feb 14, 2016 · 8 comments

Comments

@beto2909

So it occurred to me to add short voice feedback. I found responsivevoice.js and it's easy to use. Separately, I'm having problems with the audio on my Raspberry Pi. The Pi is connected to a TV, and the guides online say to edit /boot/config.txt and add/uncomment a line to force the audio output to HDMI, but it isn't doing anything. I've looked everywhere and tried everything, and I can't get my audio to work. Does anyone have the same issue?
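For reference, these are the /boot/config.txt lines the guides usually mean (reboot after editing; hdmi_force_hotplug is only needed on some TVs):

```
# /boot/config.txt
hdmi_drive=2          # force HDMI mode with audio (instead of DVI, which has no audio)
hdmi_force_hotplug=1  # treat HDMI as always connected (needed on some displays)
```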

This is some testing I've done with ResponsiveVoice that works on my PC.
Basically I just checked that the service worked: I added the response after the command "My name is *name" is executed. I've also attached the service file, and this is the responsivevoice script: https://code.responsivevoice.org/responsivevoice.js

controller.txt
ttsService.txt
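Roughly, the hookup looks like this simplified sketch (the real code is in the attached files; the helper names and the exact phrase here are just for illustration, and `responsiveVoice` is the global that responsivevoice.js defines in the browser):

```javascript
// Build the spoken reply for the "My name is *name" command.
// (Illustrative helper -- the real phrase lives in the attached ttsService.txt.)
function buildVoiceFeedback(name) {
  return 'Hello ' + name + ', nice to meet you';
}

// Called when the "My name is *name" command matches.
function onNameCommand(name) {
  var text = buildVoiceFeedback(name);
  // responsiveVoice only exists once responsivevoice.js has loaded in the
  // browser, so guard the call to keep this loadable elsewhere.
  if (typeof responsiveVoice !== 'undefined') {
    responsiveVoice.speak(text); // default voice
  }
  return text;
}

onNameCommand('Alberto');
```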

@3y3c0ugh

I recently switched to the search branch to test the functions and see if my mic setup would still work correctly. I got everything working, but just like you I'm having an issue setting up the sound. I went into /boot/config.txt and uncommented the section about pushing audio through HDMI (I believe it was hdmi_drive=2). However, upon reboot I still got no sound coming through the TV with YouTube videos. I then found that by running "sudo raspi-config" I could navigate the menu and change the audio settings to force HDMI. After rebooting from that, I got sound on my Pi and could watch videos with audio.
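For anyone else hitting this, the same raspi-config menu change can be made from a terminal (this is the standard Pi workaround for the built-in bcm2835 ALSA card, nothing mirror-specific):

```shell
# Route audio to HDMI. On the built-in ALSA card,
# numid=3 selects the output: 0 = auto, 1 = 3.5mm jack, 2 = HDMI.
sudo amixer cset numid=3 2
```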

However, when I start the mirror and use the search function, I get the requested video but no sound. So the fix worked for sound in general, but not for sound played through the mirror.

@beto2909
Author

I still need to test the YouTube search feature; I just downloaded the repository again. However, I already tried hdmi_drive=2 and basically followed the audio config instructions on the Raspberry Pi documentation page: https://www.raspberrypi.org/documentation/configuration/audio-config.md

I also saw somewhere that someone solved it by adding a ~/.asoundrc file. I'm just wondering if that has something to do with it, since I had to edit that file to make my microphone work. This is what I'm talking about: https://bbs.archlinux.org/viewtopic.php?id=173709
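For comparison, a ~/.asoundrc that splits playback and capture between two cards looks roughly like this (card/device numbers are examples only; check yours with `aplay -l` and `arecord -l`):

```
# ~/.asoundrc -- example only; adjust card/device numbers for your setup
pcm.!default {
    type asym
    playback.pcm "plughw:0,0"   # card 0: the Pi's built-in output
    capture.pcm "plughw:1,0"    # card 1: USB microphone
}
```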

The only way I can make the audio work is by running "./hello_audio.bin 1"; other than that, I can't make it work.
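If hello_audio.bin plays over HDMI, ALSA's own test tools are a good next check (these are standard ALSA utilities, nothing mirror-specific):

```shell
aplay -l                  # list ALSA playback devices; the bcm2835/HDMI card should appear
speaker-test -t wav -c 2  # play a short test sound through the default device (Ctrl+C to stop)
```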

@beto2909
Author

This is what I get when I run the mirror. I know the problem is not the mirror itself, but I'm sure it's something to do with ALSA.

(screenshot attached)

@evancohen
Owner

For a while I considered adding the capability for the mirror to "talk back" to you. Speech is a relatively new interface for technology, and one of the biggest problems is feedback for the user. It's my opinion that the best way to provide that feedback in the mirror is in the form of a UI change (i.e. the map pops up, a reminder is added, etc.), and anything more than that is actually obtrusive.

Imagine saying "turn on the lights": the lights turn on and a voice says "OK, turning on the lights". What's the point of the voice? It's redundant. You already know the mirror understood you because the lights turned on.

The only time I see synthesized voice feedback as useful is when you are asking a question.
When you are having a conversation with someone and you ask them a question ("what is the square root of 9"), they will respond verbally ("3"). But if you ask them to do something like "turn on the lights", they will perform that action; they aren't going to tell you that they are turning on the lights as they do it.

Sorry if this feels a bit like a rant (it's definitely the longest response I've ever written on GitHub), but it's been my philosophy since I started this project. Prove me wrong and I'd be more than happy to add voice feedback!

As for your audio problem, there is already an issue filed for this (#67), so I'm going to close this out. Feel free to drop into the Gitter chat if you have any more questions :)

@SenadM

SenadM commented Feb 15, 2016

The voice feedback could be great with the Wolfram API.

@shoaibuddin
Contributor

That's exactly what I did with the say.js node module. I tried it on a Mac. Waiting for the Raspberry Pi 3; I'll try it on that too.
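A minimal sketch of what that looks like, assuming `npm install say` (the wrapper name `speakFeedback` is just for illustration):

```javascript
// Optional require: the 'say' npm module shells out to the platform's TTS
// engine (the built-in `say` command on macOS, Festival on Linux).
let say = null;
try {
  say = require('say');
} catch (err) {
  // module not installed -- degrade to silent operation
}

// Hypothetical wrapper: returns true if the text was handed to a TTS engine.
function speakFeedback(text) {
  if (!say) {
    return false;
  }
  say.speak(text); // default voice and speed
  return true;
}
```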

@shoaibuddin
Contributor

Try this on the Pi:
sudo apt-get install festival festvox-kallpc16k
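Once Festival is installed, a quick way to check it from the shell (Festival's --tts flag reads text from stdin):

```shell
echo "Hello from the mirror" | festival --tts
```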

@evenstensberg

I agree with @evancohen, although we could do better. Voice could turn out to be the right way to go if we have a context-architectural reason for it, either as middleware or out of the box as core functionality. As middleware, we could pre-fetch a voice response for slow wi-fi and cache it if possible. That means you'd get somewhat progressive feedback based on your request.

Say: "Display me a map for NYC, US"
Res: "Okay, $someStringValue" (or anything similar)
Res: "Here is your desired map"

Note that we can discard the last line if content speaks for itself.

For pure context/functionality, it could work as pre-fetched responses mixed with handling invalid search queries, or cases where the user doesn't provide enough context for a map to display. Say you only say "New York": you'd get a response asking which place/state/etc. you mean.

That being said, you don't need a node lib for this to actually work. You can simply cache an mp3 file and go full-audio with no neural network implemented, and you'd save some valuable time.

Voice would be awesome: you'd get an interactive as well as a "futuristic" feel, leaving you feeling like Iron Man. However, downsides such as the size of the project, and whether we really need it, are key aspects to consider here. I'm not sure implementing this straight into the master/core branch would be the right fit; it might work better as an API or as an external npm package.
