Update mic stream example #761
Conversation
Codecov Report

@@           Coverage Diff           @@
##           master     #761   +/-  ##
=======================================
  Coverage   83.16%   83.16%
=======================================
  Files          35       35
  Lines        4395     4395
  Branches      555      555
=======================================
  Hits         3655     3655
  Misses        361      361
  Partials     379      379
=======================================

Continue to review full report at Codecov.
---
I just realized that the package.json needs to be updated for the version of watson-developer-cloud in this example. But I can confirm that the example works with that change.
---
You mean the …? It has the line …
---
@dpopp07 No, I mean the package.json in the …
---
Actually, I just noticed that all of the package.json files inside the examples folder, meaning the examples that have their own package.json in their individual folders, should probably be updated with that change, but that is a separate issue.
---
I see that now, thank you for clarifying. About to push a commit with all of those updated.
---
Also, I verified that the examples work with the updated SDK versions.
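The per-example update amounts to bumping the watson-developer-cloud dependency in each example's own package.json. As a minimal sketch, here is a hypothetical helper (not from the PR diff, with illustrative version strings) that applies such a bump to a parsed package.json object:

```javascript
// Hypothetical helper, not from the PR: bumps the watson-developer-cloud
// dependency in a parsed package.json object. The version strings used
// below are illustrative only.
function bumpSdkVersion(pkg, version) {
  if (pkg.dependencies && pkg.dependencies['watson-developer-cloud']) {
    pkg.dependencies['watson-developer-cloud'] = version;
  }
  return pkg;
}

const examplePkg = { dependencies: { 'watson-developer-cloud': '^2.29.0' } };
console.log(bumpSdkVersion(examplePkg, '^3.8.0').dependencies['watson-developer-cloud']);
// prints ^3.8.0
```

A script like this would be run once per example folder; examples without their own package.json are left untouched because the helper only rewrites an existing dependency entry.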
---
@dpopp07 Sorry for not seeing that comment about …
---
Is there a similar issue with another STT code sample? (https://github.com/watson-developer-cloud/node-sdk/blob/master/examples/speech_to_text_microphone_input/transcribe-mic-to-file.js) I pulled down the sample code and ran it, and the recorded mic WAV file sounds funny (like a slow-motion recording: I record 2 seconds, but it gives me 8 seconds or so). Setting interim_results to true there doesn't help, either. Did I miss anything else? Would you please take a look? Thanks! I am on Windows 10.
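For what it's worth, the "record 2 seconds, play back 8 seconds" symptom described above is the classic signature of a sample-rate mismatch: audio captured at one rate is played back as if it had a rate four times lower. A small sketch of the arithmetic, using assumed rates (44100 Hz capture vs. an 11025 Hz playback assumption, which are not taken from the example code):

```javascript
// Playback time of raw PCM audio is samples / assumedSampleRate.
// If 2 s of audio captured at 44100 Hz is played back as if it were
// 11025 Hz, it stretches to 8 s: a 4x "slow motion" effect. The rates
// here are assumptions for illustration only.
function playbackSeconds(numSamples, assumedRate) {
  return numSamples / assumedRate;
}

const samples = 2 * 44100; // 2 seconds captured at 44.1 kHz
console.log(playbackSeconds(samples, 44100)); // 2
console.log(playbackSeconds(samples, 11025)); // 8
```

If this is the cause, the fix would be making the recorder's capture rate match the rate declared in the WAV header, rather than anything in the recognize parameters.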
---
Hi @qunliu, I tested that example and found that it was not affected by the above problem. This example is for listening to audio from the microphone and writing the transcribed words to a text file. You should not be expecting an audio file. If you are having trouble with a text-to-speech example, please open an issue with your code and I will be happy to take a look!
---
Thanks for looking into it @dpopp07, but I am still a bit confused. If I understand this example correctly, it does a few things (please see the screenshot below):
In my test, the transcription was all wrong, so I started looking at the audio file recorded on disk and noticed that it was wrong as well, which in turn gave the wrong transcription. Sorry if I totally misunderstood the example. Please let me know what you think. Thanks! (Note: I still use createRecognizeStream, since recognizeUsingWebSocket is not published yet and would give me an error message.)
---
🎉 This PR is included in version 3.8.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

During a support session, I discovered that the example for using STT to stream from the microphone to the console was broken. This was due to the `interim_results` parameter not being passed in as `true`, as it defaults to `false`. I updated the example to use the correct parameter and changed the JSDocs for `RecognizeStream` because they incorrectly listed the default value for `interim_results` as `true`.

ref:
From the Speech to Text docs:
cc @jeffpk62
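To illustrate why the example broke, here is a minimal, hypothetical sketch of default-parameter handling (not the SDK's actual implementation): any option the caller omits falls back to its default, and because `interim_results` defaults to `false`, the example had to pass it explicitly as `true` to see streaming results:

```javascript
// Hypothetical sketch of default-parameter merging, not the SDK's actual
// code: options the caller omits fall back to the defaults, so
// interim_results must be passed in as true explicitly.
function buildRecognizeOptions(userOptions) {
  const defaults = {
    content_type: 'audio/l16; rate=16000', // illustrative default
    interim_results: false                 // the default the example tripped over
  };
  return Object.assign({}, defaults, userOptions);
}

console.log(buildRecognizeOptions({}).interim_results);                        // false
console.log(buildRecognizeOptions({ interim_results: true }).interim_results); // true
```

With `interim_results` left at `false`, nothing is emitted until the final transcript, which is why the mic-to-console example appeared broken.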