
[speech] v2 streamingRecognize returns RESOURCE_PROJECT_INVALID on regional endpoints (works via REST) #8288

@acaprino

Description

Environment details

  • OS: Windows 11 / repro also on Cloud Run nodejs22 base image
  • Node.js version: 22.x (Cloud Run) and 24.14.1 (local)
  • Package manager: pnpm 11.1.1
  • @google-cloud/speech version: tested 6.7.1 and 7.3.1 — same bug in both

Steps to reproduce

Calling streamingRecognize on a regional STT v2 endpoint (e.g. europe-west4-speech.googleapis.com, europe-west2-speech.googleapis.com) fails with gRPC INVALID_ARGUMENT ("Invalid resource field value in the request", ErrorInfo.reason = RESOURCE_PROJECT_INVALID) immediately after the first streamingConfig message is written, before any audio is sent.

The same recognizer path called via REST POST :recognize is accepted (the only error returned is INVALID_ARGUMENT about audio encoding when raw PCM is sent without a decoder config — i.e. path + auth + project are validated as OK by the server). This rules out IAM, recognizer state, project quotas, and the path itself.

Reproduced with:

  • The inline _ recognizer with chirp_2 in europe-west4 (a region where chirp_2 is listed as available, and confirmed available via REST)
  • A pre-created ACTIVE recognizer with the chirp_3 model in europe-west2 (a model available only in europe-west2 and asia-southeast1)
  • roles/speech.client alone, and with roles/speech.editor additionally granted to the calling SA; no difference either way

Using the project number instead of the project ID in the recognizer path makes no difference; neither does setting quotaProjectId to either value, nor setting servicePath/apiEndpoint explicitly.
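To make the two path variants concrete, here is a small sketch of the resource names that were tried (the project number is a placeholder); both fail identically over gRPC while both are accepted over REST:

```javascript
// Builds the v2 recognizer resource name used in both the gRPC and REST calls.
function recognizerPath(projectIdOrNumber, location, recognizerId = '_') {
  return `projects/${projectIdOrNumber}/locations/${location}/recognizers/${recognizerId}`;
}

// Both of these produce the same RESOURCE_PROJECT_INVALID error via streamingRecognize:
const byId = recognizerPath('my-project-id', 'europe-west4');
const byNumber = recognizerPath('123456789012', 'europe-west4');
```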

Minimal repro

// repro.mjs
// Run: PROJECT_ID=<your-project> node repro.mjs
import { v2 as speechV2 } from '@google-cloud/speech';

const PROJECT_ID = process.env.PROJECT_ID;
const STT_LOCATION = 'europe-west4';
const STT_MODEL = 'chirp_2';
const RECOGNIZER_ID = '_';

const apiEndpoint = `${STT_LOCATION}-speech.googleapis.com`;
const client = new speechV2.SpeechClient({ apiEndpoint });
const recognizer = `projects/${PROJECT_ID}/locations/${STT_LOCATION}/recognizers/${RECOGNIZER_ID}`;

const stream = client.streamingRecognize();
stream.on('error', (err) => {
  console.error('code:', err.code);
  console.error('msg :', err.message);
  console.error('reason:', err.reason);
  console.error('details:', JSON.stringify(err.statusDetails, null, 2));
});
stream.on('data', (r) => console.log('DATA:', JSON.stringify(r)));

stream.write({
  recognizer,
  streamingConfig: {
    config: {
      autoDecodingConfig: {},
      languageCodes: ['it-IT'],
      model: STT_MODEL,
      features: { enableAutomaticPunctuation: true },
    },
    streamingFeatures: { interimResults: true },
  },
});

// Send 1 s of silence in 250 ms chunks (8000 zero bytes ≈ 250 ms of 16 kHz, 16-bit mono PCM)
const SILENCE = Buffer.alloc(8000);
let n = 0;
const t = setInterval(() => {
  if (n >= 4) { clearInterval(t); stream.end(); return; }
  stream.write({ audio: SILENCE });
  n++;
}, 250);

Expected behavior

Either:

  1. STT accepts the streaming config and returns recognition results (or an error specifically about the audio bytes, mirroring the REST :recognize behavior), or
  2. A clear error such as FAILED_PRECONDITION with a message about which field is invalid (e.g. "regional streaming is not supported for _ recognizer", "model X requires a pre-created recognizer", etc.).

Actual behavior

ERROR code: 3
ERROR msg : 3 INVALID_ARGUMENT: Invalid resource field value in the request.
ERROR reason: RESOURCE_PROJECT_INVALID
ERROR details: [
  {
    "reason": "RESOURCE_PROJECT_INVALID",
    "domain": "googleapis.com",
    "metadata": {
      "method": "google.cloud.speech.v2.Speech.StreamingRecognize",
      "service": "speech.googleapis.com"
    }
  }
]

The error message says Invalid resource field value but does not identify which resource field or why the project is considered invalid. The same project, same recognizer path, and same credentials all work via REST :recognize:

curl -X POST "https://europe-west4-speech.googleapis.com/v2/projects/<PROJECT_ID>/locations/europe-west4/recognizers/_:recognize" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"config":{"autoDecodingConfig":{},"languageCodes":["it-IT"],"model":"chirp_2"},"content":"<b64 silence>"}'
# → 400 INVALID_ARGUMENT: "Audio data does not appear to be in a supported encoding."
#   (path + project + auth accepted; only the audio content is rejected.)

Hypothesis

The gRPC streamingRecognize path in the Node SDK appears to send project/routing metadata that does not survive routing through the regional endpoint, while the identical call over REST succeeds. This points to a client-side issue specific to the streaming path, most likely in its metadata or x-goog-request-params header construction.
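If the routing header is at fault, the value the client should be emitting is easy to state. The sketch below is a hypothetical reconstruction: that StreamingRecognize routes on the recognizer field is an assumption based on the v2 API's resource-name routing, and the header the SDK actually sends would need to be confirmed from a gRPC trace.

```javascript
// Hypothetical x-goog-request-params value for this call: the recognizer
// resource name, URL-encoded, keyed by the routed field name.
function requestParamsHeader(recognizer) {
  return `recognizer=${encodeURIComponent(recognizer)}`;
}

const recognizer = 'projects/my-project/locations/europe-west4/recognizers/_';
console.log(requestParamsHeader(recognizer));
// recognizer=projects%2Fmy-project%2Flocations%2Feurope-west4%2Frecognizers%2F_
```

If the header the SDK sends on the streaming call differs from this (or is missing), that would explain why the regional frontend cannot resolve the project.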

Workaround

For the time being, applications stuck on this can:

  1. Use recognize (non-streaming) via REST, accepting batch-only operation, or
  2. Stay on global location — but most of the interesting models (chirp_2, chirp_3) are not in global.

Neither is great for real-time dictation use cases.
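For completeness, workaround 1 can be sketched as a plain REST request builder mirroring the curl call above; token acquisition (e.g. via google-auth-library) and the fetch call itself are omitted, and the model/language values are just the ones from this repro:

```javascript
// Builds the URL and JSON body for a batch :recognize call against the
// regional v2 REST endpoint.
function buildRecognizeRequest(project, location, audioBase64) {
  return {
    url: `https://${location}-speech.googleapis.com/v2/projects/${project}/locations/${location}/recognizers/_:recognize`,
    body: {
      config: {
        autoDecodingConfig: {}, // let the server detect the container format
        languageCodes: ['it-IT'],
        model: 'chirp_2',
      },
      content: audioBase64, // base64-encoded audio bytes
    },
  };
}

// Usage (token obtained separately):
// const { url, body } = buildRecognizeRequest(PROJECT_ID, 'europe-west4', b64Audio);
// const res = await fetch(url, {
//   method: 'POST',
//   headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
//   body: JSON.stringify(body),
// });
```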

Happy to provide additional traces, gRPC channel debug output, or test against pre-release SDK versions if useful.
