
[Bug] Unexpected "prompt token count exceeds the limit of 0" error with o3-mini #32

Open
@jesusreal

Description


Describe the bug
I have a working GitHub Copilot extension using the 'gpt-4o-mini' and 'gpt-4o' models. I am now trying to use the o3-mini model instead, but I get this error:

 {
    message: 'prompt token count of 1083 exceeds the limit of 0',
    code: 'model_max_prompt_tokens_exceeded'
 }

I am creating an API client like this:

    import OpenAI from 'openai';

    const apiClient = new OpenAI({
      baseURL: 'https://api.githubcopilot.com',
      apiKey
    });

I am creating a stream like this:

    const stream = await apiClient.chat.completions.create({
      stream: true,
      model: 'o3-mini',
      messages: [systemMessage, ...userMessages],
      max_completion_tokens: 4096,
      max_tokens: 4096
    });

I added max_completion_tokens and max_tokens as attempts to fix the issue, but the same error still persists.
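For reference, here is a minimal sketch of a helper I am experimenting with that sends only one of the two token parameters depending on the model, since the newer reasoning-style models reportedly accept max_completion_tokens rather than max_tokens. The helper name and the model list are my own assumptions, not anything from the Copilot docs:

```typescript
// Hypothetical helper: picks the token-limit parameter per model family.
// Assumption: reasoning-style models (o1, o3-mini, ...) take
// max_completion_tokens, while the gpt-4o family takes max_tokens.
const REASONING_MODELS = new Set(['o1', 'o1-mini', 'o3-mini']);

function buildParams(model: string, limit: number): Record<string, unknown> {
  const params: Record<string, unknown> = { model, stream: true };
  if (REASONING_MODELS.has(model)) {
    // Newer parameter name used by reasoning models.
    params.max_completion_tokens = limit;
  } else {
    // Legacy parameter name still used by the gpt-4o family.
    params.max_tokens = limit;
  }
  return params;
}
```

This at least avoids sending both parameters at once, though it has not resolved the "limit of 0" error for me, which is why I suspect a server-side or plan-level limit instead.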

I researched this, and it seems it should be possible to set a proper usage limit for the model somewhere.

One explanation I found was: "The error 'OpenAI o3-mini exceeds the limit of 0' likely indicates that you have exhausted your usage limit for the o3-mini model. This means you've reached the maximum number of tokens (words or parts of words) you can process per minute, hour, or day, depending on your specific plan."

I asked my enterprise admin, but he didn't find any option to do so. I appreciate any comment that helps me solve the issue 🙏
