Description
I set up a demo of the Google Cloud Speech API running on Android, using AudioRecord to capture audio from the microphone.
It works, but after a while (about one minute) the channel closes by itself.
I basically used the Java StreamRecognizeClient example, including the latest changes for gRPC 1.0 and the ManagedChannelBuilder.
Is there a setting to configure a timeout, or am I doing something wrong here?
Here is the main code setup:
Setup channel
private static final List<String> OAUTH2_SCOPES = Arrays.asList("https://www.googleapis.com/auth/cloud-platform");
public static ManagedChannel createChannel(InputStream authorizationFile, String host, int port) throws IOException {
GoogleCredentials creds = GoogleCredentials.fromStream(authorizationFile);
creds = creds.createScoped(OAUTH2_SCOPES);
ManagedChannel channel =
ManagedChannelBuilder.forAddress(host, port)
.intercept(new ClientAuthInterceptor(creds, Executors.newSingleThreadExecutor()))
.build();
return channel;
}
Recognize setup and loop
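A note on the channel setup above: if an idle proxy or load balancer is dropping the connection, configuring keep-alive pings on the channel can help. The sketch below is an assumption on my part: `keepAliveTime`/`keepAliveTimeout` were added to `ManagedChannelBuilder` in later gRPC Java releases and are not available in gRPC 1.0, so this only applies after upgrading the dependency.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import com.google.auth.oauth2.GoogleCredentials;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.auth.ClientAuthInterceptor;

public class ChannelFactory {
    // Sketch: keep the HTTP/2 connection alive with periodic pings so an
    // idle intermediary does not silently close it. The 30 s / 10 s values
    // are illustrative, not recommended constants.
    public static ManagedChannel createChannelWithKeepAlive(
            InputStream authorizationFile, String host, int port) throws IOException {
        GoogleCredentials creds = GoogleCredentials.fromStream(authorizationFile)
                .createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
        return ManagedChannelBuilder.forAddress(host, port)
                .keepAliveTime(30, TimeUnit.SECONDS)     // send a ping after 30 s without activity
                .keepAliveTimeout(10, TimeUnit.SECONDS)  // treat the connection as dead if no ack in 10 s
                .intercept(new ClientAuthInterceptor(creds, Executors.newSingleThreadExecutor()))
                .build();
    }
}
```

Note that keep-alive only guards against idle disconnects; it does not help if the server itself is ending the stream.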
public void recognize() throws InterruptedException, IOException {
try {
// Build and send a StreamingRecognizeRequest containing the parameters for
// processing the audio.
RecognitionConfig config =
RecognitionConfig.newBuilder()
.setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
.setSampleRate(this.RECORDER_SAMPLERATE)
//.setLanguageCode("en-US")
.build();
// Streaming config
StreamingRecognitionConfig streamingConfig =
StreamingRecognitionConfig.newBuilder()
.setConfig(config)
.setInterimResults(true)
.setSingleUtterance(false)
.build();
// First request
StreamingRecognizeRequest initial =
StreamingRecognizeRequest.newBuilder().setStreamingConfig(streamingConfig).build();
requestObserver.onNext(initial);
// Microphone listener and recorder
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
this.RECORDER_SAMPLERATE,
this.RECORDER_CHANNELS,
this.RECORDER_AUDIO_ENCODING,
bufferSize);
recorder.startRecording();
byte[] buffer = new byte[bufferSize];
int recordState;
// loop through the audio samplings
while ( (recordState = recorder.read(buffer, 0, buffer.length) ) > -1 ) {
// skip if there is no data
// (the loop condition already excludes negative error codes, so test for zero bytes)
if( recordState == 0 )
continue;
// create a new recognition request
StreamingRecognizeRequest request =
StreamingRecognizeRequest.newBuilder()
// only send the bytes actually read this iteration
.setAudioContent(ByteString.copyFrom(buffer, 0, recordState))
.build();
// put it on the works
requestObserver.onNext(request);
}
} catch (RuntimeException e) {
// Cancel RPC.
requestObserver.onError(e);
throw e;
}
// Mark the end of requests.
requestObserver.onCompleted();
}
The full code is in this repo: https://github.com/Cloudoki/android-google-cloud-speech-api
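If the server is closing the stream at roughly the one-minute mark, a common workaround is to finish the current stream just before the limit and immediately open a new one with the same StreamingRecognitionConfig. Below is a minimal timing sketch; the 55-second threshold and the class/method names are illustrative assumptions, not documented API.

```java
public class StreamRestartPolicy {
    // Assumption: the server closes a streaming recognize RPC after roughly
    // one minute, so restart proactively with some margin before that.
    static final long STREAM_LIMIT_MS = 55_000;

    /** Returns true once the current stream is old enough to be restarted. */
    public static boolean shouldRestart(long streamStartMillis, long nowMillis) {
        return nowMillis - streamStartMillis >= STREAM_LIMIT_MS;
    }
}
```

Inside the read loop, one could check `shouldRestart(streamStart, System.currentTimeMillis())` on each iteration; when it returns true, call `requestObserver.onCompleted()`, create a fresh request observer, resend the initial StreamingRecognizeRequest carrying the config, and reset `streamStart`.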
I need some help here, because I can't figure this out.
Thanks!