Default rooms #27
Comments
It could also use the default room when the room slot comes in undefined (which it will if you don't say 'in') or when the room is garbled because Alexa didn't hear you correctly.
You can do something like this (specific to me): write a function `roomOrDefault(roomName)`, then replace `intent.slots.Room.value` with `roomOrDefault(intent.slots.Room.value)` everywhere.
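A minimal sketch of such a helper (the default room and the list of valid zone names are assumptions for illustration; substitute your own):

```javascript
// Hypothetical helper: fall back to a configured default room when the
// Room slot is missing or unrecognized. DEFAULT_ROOM and VALID_ROOMS are
// assumptions -- substitute your own Sonos zone names.
const DEFAULT_ROOM = 'Living Room';
const VALID_ROOMS = ['Living Room', 'Kitchen', 'Patio', 'Garage'];

function roomOrDefault(roomName) {
  // The slot comes in undefined when the user omits "in <room>"
  if (!roomName) return DEFAULT_ROOM;
  // Fall back when Alexa misheard and the slot matches no known zone
  const match = VALID_ROOMS.find(
    (r) => r.toLowerCase() === roomName.toLowerCase()
  );
  return match || DEFAULT_ROOM;
}
```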
I just finished working on musicSearch for node-sonos-http-api and have been testing it with echo-sonos. It supports Apple Music, Spotify, Deezer, and the local music library. What I'm finding while testing is that the number of utterances is growing rapidly, to the point that it is breaking the model. As a result, I've been thinking about ways to greatly trim down the number of utterances. I too think that there should be a means for setting a default room as well as a default service, and including utterances for changing them. I would create a slot for the different service types, including apple, spotify, deezer, library, siriusxm, pandora, and presets. I'm wrestling with where to store the defaults so that they can easily be changed by an utterance. I'm thinking DynamoDB or S3. I'm also talking to the Echo product manager about adding an Echo ID to the JSON message that is passed to Lambda, so that the defaults could be set differently for different Echos/Rooms. (My company spends a TON with AWS, so I have access to all of the PMs :) Should I pursue making these changes?
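As a sketch of the idea (an in-memory Map stands in for DynamoDB or S3, and the per-device `echoId` key is an assumption, since the Echo ID field discussed here doesn't exist yet):

```javascript
// Hypothetical per-Echo defaults store. The Map stands in for DynamoDB
// or S3, and "echoId" assumes the Echo ID field gets added to the
// request JSON. Not actual echo-sonos code.
const store = new Map();

// Presets as the fallback service keeps today's echo-sonos behavior
const FALLBACK = { service: 'presets', room: 'Living Room' };

// Called by "change service to X" / "change room to Y" style intents
function setDefaults(echoId, changes) {
  store.set(echoId, { ...getDefaults(echoId), ...changes });
}

function getDefaults(echoId) {
  return store.get(echoId) || { ...FALLBACK };
}
```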
I may be in the minority, but I don't see myself using defaults on a per-room basis. I tend to pick music for the room (or rooms) I'm in. The room isn't the pivot point for me - it's the music and, by correlation, the music service. Further, I don't subscribe to multiple services, and when I talk with some other big music listeners, they've invested heavily in one service or another and do not tend to use multiple services as normal practice. Some use Pandora One. Others are big users of Spotify. Still others have adopted Apple Music. In each case, they have their "go to" service. The exception is that a few of these contacts have big legacy libraries on a computer. I guess all of this data points to the desire for setting a default service. However, default rooms are less valuable for the installations I work with.
That is awesome you can get to the product managers! I would love to see an Echo ID sent so your skills can know which device the request came from. If only that could also be used with the built-in smart home skills too, so you could say "turn on lights" and, depending on which Echo you were talking to, it would turn on different lights based on the room it's in. It would also be useful for "Tell Sonos to play" to control different speakers depending on the room. I have skills named "TV pause", "TV play", etc., and would be able to set those up for multiple TVs/rooms with an Echo ID to work with.

The only other thing I'd love is if they'd let you assign a skill as the default interface to open automatically when you say "Alexa". If the original interface is not the default, then you could do something like "Alexa, ask Amazon ...". I built a skill for work that interprets all aspects of our schedule and put an Echo in our lounge. But people always screw up asking it stuff directly ("Alexa, who works tomorrow" vs. "Alexa, ask the schedule who works tomorrow").

As far as a default room, for those of you who like to play around with it, I copied the Sonos core skill into 5 duplicate skills. The skills all have invocation names the same as my speakers. So I have skills named "patio", "living room", "garage", etc. Each skill is coded via its options.js to send commands to that speaker only. Then I can say "Alexa, play pandora in the living room", "Alexa, tell the patio to pause", "Alexa, volume 40 in the garage", "Alexa, tell the living room to join the kitchen", etc. I think it just makes it more convenient to give commands.
Bradalane, I agree on the services, except that many may have two: Pandora plus another main streaming service. I also agree that you tend to use one more than any other. I actually want three: SiriusXM, Pandora, and Apple Music. And then there are the presets.

As far as rooms go, the rub comes when you have multiple Echos and multiple Sonos and really would like to, by default, associate a room to a specific Echo (which I can't do yet). Short of that, I still tend to play music in one room more than another and don't constantly wander around the house playing different music. I'm thinking it should be more like a multi-room stereo tuner, where you change the service (FM, XM, CD, TV, ...) and can change the room setting too (eventually room by room). To do this there should be utterances like "Change service to X", which would change the defaults. You could then simply say "Alexa, ask Sonos to play Coldplay" - or, jratliff681, instead of "Alexa, bla, bla": "Sonos, play Coldplay". What I'm really driving towards is being able to have the Echo NATIVE simple music experience with Sonos.

I'm thinking that it may be better to just fork this project to something like Echo-Sonos-Plus and implement this new behavior. That way I don't disrupt the current behavior. Thoughts?
Forking initially does make sense. Then, once it's been "in the wild" for a few months, it can be decided if merging is desirable. I have only one Echo, so I didn't consider that option. I can see your point. I do have nine "zones" for my Sonos.
Setting a default room would be useful for me as well and would reduce utterance complexity. It's currently failing the wife test. Setting the default to the last used room would also make the flow easier. A command like "Alexa, tell Sonos I'm in the kitchen", as @jplourde5 suggests, would be nice.
I hear you on the wife test. My problem also, and one of the reasons I want more simplicity. I'm also running out of breath with some of these utterances ;) I'll go the forked route and shoot for a default behavior that mimics what is here now. The main problem I have now is with the Play "something" preset command, because it collides with where I'm headed. But I could make Presets the default service and effectively have the current behavior as a default, sans all of the other music service capabilities that don't exist in the standard branch now anyway.
@jplourde5 - I'd vote for forging ahead in the same project, since merging the forks over time becomes problematic and most people seem to want some variant of all of these features anyway. I would suggest:
Setting up lots of invocation names would be a great feature if Echo could support it easily. Might be something to document in "Optional Setup" someplace for the brave. |
Will do. I'd prefer not to fork it if we can avoid it.
I finished the work and will begin testing tonight. I'll provide links to my branches for others to test once I finish my initial testing, hopefully before the weekend. The changes will add support for Apple Music, Spotify, Deezer, SiriusXM, and the local music library. Default behavior will be as echo-sonos works now. Advanced mode will allow users to change services and rooms, allowing for easier utterances.
Okay, I've done some preliminary testing and things seem to be working. You can get the releases at:
- node-sonos-http-api (use this release instead of what is specified in the documentation)
- echo-sonos

I'm sure that I'll be fixing, tuning, and updating the releases after further testing. Let me know if you run into any problems. Good luck and enjoy!
OK, I'm "up and running" with the new code. Things seem to be good thus far, with one exception. I'm performing the URL tests ... In my node "server" command window, I am seeing "undefined" output after each command is issued. E.g.: I did the "zones" URL and I get back the xml in the browser, and the command window for the server spits out "undefined". I'm not sure where the message is coming from.
FYI @jplourde5 - the echo-sonos 'release' for v2.0.beta does not contain the full set of files. I was using git to fetch the files from master and it was missing some things, like the slots files for XM. However, the master branch appears to be ahead of the beta branch. So, I then followed your instructions and grabbed the tar.gz file. It too was missing some files. I've pulled the various files from the different branches and [hopefully] have a similar source tree to what you are testing. FYI2: A similar mismatch seems to have occurred with echo/intents.json, which is missing 'PlayIntents', so the lambda test fails. Changing the test to use 'PlayPresetIntent' succeeds. BTW: Nice work! I hope some of this testing will be helpful.
Did you get it working? You may just want to go to my branch directories and pull down the files. Maybe there are some links back to the main branches. I'll see about the undefined; it may just be a leftover console.log call. BTW, I may have an issue with SiriusXM that I'm looking at now. Keep me posted.
Found and fixed the SiriusXM issue. Update the siriusXM.js file with my latest copy in node-sonos-http-api if you are using SiriusXM.
There may be an issue with enabling "advanced options" ... steps:
The Alexa Skill reported: "the remote endpoint could not be called, or the response it returned was invalid". I added some additional logging and have the following ...
I did add the ... Suggestions?
@jplourde5, when you say "branch directories", I believe that is what I attempted. However, there are some updates in ...
For the missing intents and slot files, did you happen to use the links in the read.me? If so, that is the problem there. They all link back to the main branch, which is what they need to point to once I submit a pull request. I'll change them to point to my branch in the meantime. If the "undefined" that you're seeing is showing up repeatedly with some frequency by itself, I believe that is something in the main code of the beta branch. You have to stick with the beta branch, because it is built using Promises and the main branch is not, until jishi replaces the main with the beta. I've been using the beta code for a while and haven't really had any problems with it. The "undefined" message is puzzling, but it doesn't seem to hurt anything. Let me know if you figure out what is causing it.
Okay, I fixed the links in the README so that they point to my files. You are going to have a lot of problems if you used something different. Redo the "Create The Alexa Skill..." section, steps 4 and 5, if you attempted the earlier links. This may clear up your problems.
Thanks. I was not using the links. I was using the instructions to work with the files in the fetched source. Anyway, here is what I have [re]done ...
When I test using a local URL in a web browser, the request executes correctly and the server console spits out ... As you said, since it seems to be working at this step, I'm moving on and I'll circle back later. Update: After building and configuring the lambda function, I attempt to run the script from ... I changed ...
Do you see musicSearch.js in your lib/actions directory? Also, have you changed to the node-sonos-http-api directory and executed npm install --production? This loads all of the dependencies into the node_modules directory. If you don't see a node_modules directory in the api directory, where you should also see the lib directory, then you haven't installed the dependencies. Once installed, you should see a number of directories in node_modules, including a fuse.js directory. If so, then you should have all of the needed dependencies in place.
...cont. Also, does http://localhost:5005/zones in your browser spit out a bunch of zone information?
...cont. One other point: you need to follow all of the instructions very carefully, line by line. Missing a step is usually what causes a problem. Granted, the fact that we are using my interim builds does create some new ways for things to go wrong ;) If you want to give me your number, I'll call and try to help you through it.
Sorry if I have sounded 'dumb'. Trust me, I'm not :-) ... _well, not most of the time_. I followed the steps, including verification at each step. Yes, ... At this point, I'm ignoring the ... I have hit an odd problem which does not appear to be echo-sonos. The Skill works from the Service Simulator by typing in a command such as ...
...cont again. Reading further back regarding the table problem, I deleted my echo-sonos table and I'm having the same problem recreating the table. Looking at it now.
Update: It seems some utterances are snatched or unrecognized by the Amazon servers rather than handed to the Skill. I can turn the volume up/down. I cannot start an artist, track, station, or radio. If I ask ...
Added back the utterances for the MusicRadioIntent.
How's it going today?
Mixed results, but it looks like the only way to get things right is to always start from the very beginning and nuke everything ...
Using the external URL ... As I said, I'm just going to start all over again. To make sure I have the correct code, which branch do I need to fetch for ...
I'll do a quick install of my test distribution and see if I have similar problems. The sirius-channels.json file should not cause a problem; only the .js files are "supposed" to get loaded. Thanks for going through all of this trouble. It is tedious and not so intuitive, but it works great once it is working.
No problem. I read the ... I created a ...
Just reinstalled and it works fine. Quick instructions:
This should work if everything is set up correctly. BTW, are you using Windows or Mac?
Let me give the instructions a try. I'm curious how the ZIP compares to the repositories, since I have been just using ... To be sure, you want me to grab ... Also, to eliminate all possible mismatches, which code set for ... I deployed this on a lightweight Linux server (no GUI). Once it is up and running, I'll turn it into an appliance and bury it in a cabinet with similar 24/7 services.
Are you going to use a Raspberry for that?
More or less. I have a bunch of RPi 'knockoffs' that were $15 each. Once I have a project working, I port it over and turn it into a little black box.
Utterances are still not fully working. Same as before. Tested with Apple Music. Working:
Not working:
Everything is working via URL and via the Skill Service Simulator. This suggests the issue is with processing the utterances.
What do you see when you issue these two commands in your browser?
the URLs all work as expected with Apple music set as the default service:
Okay, great. node-sonos-http-api should be good now. Let's pivot to echo-sonos. Try downloading this zip and install everything from it per the instructions.
I've been wanting to play with an RPi and am thinking about porting this to one just for S&Gs. Thinking about making it auto-upgrading and creating some install scripts to set all of this up in a much easier fashion.
I compared the code and it's identical. But I'll load it, all the same.
Let me know when you get it loaded and we'll run some tests.
It's loaded. The results of my tests from a few posts back are unchanged.
Go to developer.amazon.com and go to the test link for your Alexa skill.
Already there. (You sure there is not a chat service you would rather move this to?) FYI - I emailed you also.
Type "change music to apple" and then "play boston".
Both worked. (This is via the Service Simulator text field I was using before. It shows both the Lambda Request JSON and the Lambda Response JSON.)
Try playing what you were asking Echo to do.
Yes, that all works via the Simulator. This is what I was trying to say above: the Simulator input is fine. My issue has been the actual utterances to Echo. The Echo seems to ignore the ...
WORKS: Service Simulator: ...
WORKS: Speak: ...
Try a few more times on Echo. You must say clearly, "Ask Sonos to ...". It is a PITA. I'm going to be talking to the Echo product manager on Tuesday about supporting multiple trigger words and being able to link alternates to a skill. That way you could say, "Sonos, play boston".
You have to say Ask (or Tell) Sonos every single time. If you don't, Echo goes native. I mess it up all of the time and it annoys the #$!! out of me. Ergo, going to talk to the PM.
I got it up and running. I've only been using the library music search. It's working pretty well so far. I had an issue at first with it not finding some songs, but realized I needed to reload the library. However, the reload command is not working: I deleted the library.json file and it gets repopulated when you run it again, but I can't reload it via the URL.
I'll check out the load command. How many tracks do you have in your library?
Fixed the load command and created a new distribution. You can download musicSearch.js or add 'load' to the musicTypes array at the top.
1891 tracks. Typed load in there and it works now, thanks!
No problem. The library index gets cached to disk, as you noticed. You only need to issue the load command when you've added to or deleted from your library and want the changes reflected in musicSearch.
There's a general desire to be able to act on the system without always specifying which room to act on.
E.g., issue #16 - it would be nice for JoinGroup to be able to look up which room is currently the coordinator and default to that. It would not work in a scenario in which multiple groups are playing at the same time, but there's no way around that except to specify a room.
Functions like next, previous, and so on would also benefit from this feature.
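A hedged sketch of that coordinator lookup, assuming the zone shape returned by node-sonos-http-api's /zones endpoint (treat the exact field names as assumptions):

```javascript
// Pick a default room from a /zones-style response: prefer the
// coordinator of a group that is currently playing, else fall back to
// the first zone. Field names (coordinator, roomName,
// state.playbackState) are assumed to match the /zones output.
function defaultRoomFromZones(zones) {
  const playing = zones.find(
    (z) => z.coordinator.state && z.coordinator.state.playbackState === 'PLAYING'
  );
  const zone = playing || zones[0];
  return zone ? zone.coordinator.roomName : undefined;
}
```

If multiple groups are playing, this simply picks the first playing group it finds, which matches the limitation noted above.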