
Gemini 2.5 Pro Batch API: Jobs stuck in PENDING/RUNNING for over 24 hours #2221

@4serviceSoftware

Description

Hello everyone,

I’m experiencing a critical issue with the Gemini 2.5 Pro model via the Batch API. My jobs have been stuck in the JOB_STATE_PENDING state (and some in JOB_STATE_RUNNING) for more than 24 hours, with no output and no error messages.

Details of the issue:

Model: Gemini 2.5 Pro

Current Behavior: The system logs show successful batch_response_get requests, but the internal job status remains pending indefinitely.

Duration: 24+ hours (and counting).

Impact: This is stalling our production pipeline and data processing.

I have noticed several other developers reporting similar issues recently (some mentioning delays up to 4 days with 2.5 Flash as well). It seems like a broader infrastructure bottleneck rather than an isolated request error.

Questions for the community/Google team:

Is there a known outage or a massive backlog for Batch processing in specific regions?

Should we keep these jobs running, or is it better to cancel and resubmit? (Though resubmitting seems to lead to the same result).

Are there any internal timeout limits we should be aware of for Gemini 2.5 Pro batch jobs?
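For anyone monitoring jobs from code rather than the console, here is a minimal polling sketch. It assumes the `google-genai` Python SDK (`client.batches.get` and the `JobState` enum names used below are from its documentation); the job name and the 48-hour give-up timeout are illustrative, not values from this issue.

```python
import time

# Terminal batch-job states as named in the google-genai SDK's JobState enum;
# anything else (PENDING, RUNNING) means the job is still queued or in progress.
TERMINAL_STATES = {
    "JOB_STATE_SUCCEEDED",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_EXPIRED",
}

def is_terminal(state_name: str) -> bool:
    """Return True once a batch job can no longer change state."""
    return state_name in TERMINAL_STATES

def wait_for_batch(client, job_name: str,
                   poll_seconds: int = 60,
                   timeout_seconds: int = 48 * 3600):
    """Poll a batch job until it reaches a terminal state or we give up."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        job = client.batches.get(name=job_name)  # e.g. "batches/abc123"
        state = job.state.name
        print(f"{job_name}: {state}")
        if is_terminal(state):
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"{job_name} not terminal after {timeout_seconds}s")

# Usage (requires a GEMINI_API_KEY in the environment):
#   from google import genai
#   client = genai.Client()
#   job = wait_for_batch(client, "batches/your-job-name")
```

At minimum this makes it easy to log state transitions over time, which is useful evidence when reporting a stuck job.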

Metadata

Labels

`priority: p2` (Moderately-important priority. Fix may not be included in next release.)
`type: bug` (Error or flaw in code with unintended results or allowing sub-optimal usage patterns.)
