URL to use with LUIS container #424
I figured out that I had an extra node (prediction) in the URL. I removed that and am getting a little further, but I still get a 400 back. When I use the Swagger page to send a test request, it returns a 200, and I am setting identical properties as far as I can see.
The Speech SDK should not be used against a LUIS container; it should only be used against a Speech container. To query a LUIS container, use the LUIS SDK or the LUIS REST API.
I was following the example for the Speech SDK, from here: https://github.com/Azure-Samples/cognitive-services-speech-sdk The scenario is that we are capturing audio directly from the PC microphone and trying to determine the intent, based on the LUIS app we trained. The example I linked to does exactly that, and it works very well with the LUIS cloud-based service. Using the Speech SDK seemed to save us from having to make a separate explicit call to the speech-to-text API and then a second call to LUIS. So, all I am attempting to do is switch from using LUIS in the cloud to using the LUIS container. I can't imagine that the Speech SDK would work for one but not the other.
A cloud is different from a container. A cloud can be composed of multiple aggregated containers (sometimes called microservices). So there is a LUIS container and there is a Speech container: two separate containers. The Speech container only does speech; the LUIS container only does LUIS.

In the cloud, because both containers are known to be deployed, and because it is bad for performance to have a remote client go to the cloud, do speech, come back, then go to the cloud again and do LUIS, we provide a feature that lets the client go to Speech, stay in the cloud, go to LUIS, and then come back to the client. Thus even in this scenario the Speech SDK sends audio to the Speech cloud container, and then the Speech cloud container talks to the LUIS cloud container with text. The LUIS container has no concept of accepting audio (it would not make sense for the LUIS container to accept streaming audio; LUIS is a text-based service).

With on-prem, we have no certainty that the customer has deployed both containers, and we don't presume to orchestrate between containers on the customer's premises. If both containers are deployed on-prem, given they are more local to the client, it is not a burden to do the speech recognition first, get the recognized text back in the client, and then take that text to LUIS.
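The two-step on-prem flow described above might look like the following sketch. The host names, ports, endpoint paths, and the `DisplayText`/`query` field names are assumptions drawn from this thread and the public container docs, not a verified implementation:

```python
import json
import urllib.parse
import urllib.request

def stt_url(host):
    # Assumed speech-to-text container REST endpoint (mirrors the cloud
    # STT REST API path shape).
    return (f"http://{host}/speech/recognition/conversation"
            "/cognitiveservices/v1?language=en-US")

def luis_url(host, app_id, slot="production"):
    # LUIS container prediction endpoint, matching the Swagger URL quoted
    # later in this thread (note: no "prediction" segment after "luis").
    return f"http://{host}/luis/v3.0/apps/{app_id}/slots/{slot}/predict"

def recognize_intent(stt_host, luis_host, app_id, wav_bytes):
    # Step 1: audio -> text via the local speech-to-text container.
    req = urllib.request.Request(
        stt_url(stt_host),
        data=wav_bytes,
        headers={"Content-Type":
                 "audio/wav; codecs=audio/pcm; samplerate=16000"})
    with urllib.request.urlopen(req) as resp:
        text = json.load(resp).get("DisplayText", "")
    # Step 2: text -> intent via the local LUIS container.
    query = urllib.parse.urlencode({"query": text})
    with urllib.request.urlopen(f"{luis_url(luis_host, app_id)}?{query}") as resp:
        return json.load(resp)
```

The client owns the orchestration: it sees the recognized text between the two calls, which is exactly the step the cloud feature hides.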
Okay. Can you help me understand what is meant by "intent recognition" in the Speech SDK docs? From that description, it seems like the Speech SDK is able to also serve as a client for the LUIS service and have it recognize the intents (which is exactly what the sample illustrates). It makes no mention of whether the LUIS instance is in the cloud or in a local container. That's where my confusion is coming from. Are you saying that if I am running the LUIS container locally, I have to also run the speech-to-text container locally and change my code to handle brokering the inputs/outputs between those two? Where is there any documentation on how to do that?
Okay. I understand your point about the cloud orchestration. Does this imply that when I'm running the containers locally I need to run both the speech-to-text and the LUIS containers and manually handle the orchestration in my code? And if so, do I need to use both the Speech and LUIS SDKs? Is there any documentation on this approach?
I have switched the Speech SDK code to use the SpeechRecognizer rather than the IntentRecognizer and am attempting to implement the LUIS SDK code. I am getting a "Not Found" exception, and I think it's because there is a disconnect between the LUIS container URL and the URL that is built by the SDK. I have this C# code using the LUIS SDK (v3.0):
This builds the following URL, based on my params and the host IP/port:

http://192.168.1.91:5001/luis/prediction/v3.0/apps/LUISAPPID/slots/production/predict

The URL I see when I view and execute a request from the Swagger page is:

http://192.168.1.91:5001/luis/v3.0/apps/LUISAPPID/slots/production/predict

Notice, there is no "prediction" node in the Swagger request after "luis". So, either the container needs to accept that in the URL or the SDK needs to remove it. Which is correct?
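To make the mismatch concrete, the two URLs quoted above differ only in the extra `prediction` path segment; a quick comparison (URLs copied from this comment, `LUISAPPID` is a placeholder):

```python
# URL built by the LUIS SDK v3.0 (per this comment)
sdk_url = ("http://192.168.1.91:5001/luis/prediction/v3.0"
           "/apps/LUISAPPID/slots/production/predict")

# URL the container's Swagger page actually exposes
swagger_url = ("http://192.168.1.91:5001/luis/v3.0"
               "/apps/LUISAPPID/slots/production/predict")

# the only difference is the "prediction" path segment
assert sdk_url.replace("/luis/prediction/", "/luis/") == swagger_url
```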
Support for the LUIS SDK and container is handled by the LUIS team; this repository is owned by the Speech team. However, looking at the docs for the LUIS container here: https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-container-howto?tabs=v3#query-the-luis-app they have simple examples of querying it via the LUIS REST API with curl. Can you try that?
Yes - I implemented the REST call and built the request URL myself and have a working scenario now. It's not the most elegant solution, but it demos the concepts at least. Thanks for the help.
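A minimal sketch of the kind of hand-built REST URL described above. The path shape comes from the container's Swagger page quoted earlier in this thread; the `query` parameter name is an assumption based on the LUIS v3 prediction API:

```python
import urllib.parse

def prediction_url(host, app_id, query, slot="production"):
    # Swagger-style container URL: no "prediction" segment after "luis".
    path = f"http://{host}/luis/v3.0/apps/{app_id}/slots/{slot}/predict"
    return path + "?" + urllib.parse.urlencode({"query": query})

# e.g. fetch with urllib.request.urlopen(prediction_url(...)) and parse
# the JSON body for the top-scoring intent.
```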
Closing the issue as there seems to be a resolution. Thanks for the feedback.
Is there a proper code sample showing a LUIS client making requests against a container, and the proper base URL to use with the container?
I am using the LUIS container in an IoT Edge deployment and am attempting to call the LUIS prediction endpoint from another container. The LUIS container is listening on port 5001, and the URL I'm using is this:
The error I'm getting is this:
WebSocket Upgrade failed with HTTP status code: 404 SessionId: 3cfe2509ef4e49919e594abf639ccfeb
I see the request in the LUIS container logs, and the message says: "The request path /luis//predict does not match a supported file type."
What does this mean? What am I missing?
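One observation, offered as a guess: the correct container path quoted earlier in this thread has the form /luis/v3.0/apps/&lt;appId&gt;/slots/&lt;slot&gt;/predict, so the doubled slash in "/luis//predict" suggests the version, app ID, and slot segments were never filled into the configured endpoint. A hypothetical sanity check:

```python
def has_empty_segment(path):
    # True when the path contains a "//", i.e. a segment that was left
    # blank (such as a missing app ID or version).
    return "" in path.strip("/").split("/")

assert has_empty_segment("/luis//predict")
assert not has_empty_segment(
    "/luis/v3.0/apps/LUISAPPID/slots/production/predict")
```

Separately, a WebSocket upgrade is something the Speech SDK attempts; per the discussion above, the Speech SDK should not be pointed at a LUIS container at all.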