
Curl works but Python fails for the same API key and same request #377

@tristandeborde

Description


Describe the bug

I'm trying to use the ChatCompletion API by following the documentation examples.
I'm getting: openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions)

  • I did check that I'm using ChatCompletion and not Completion.
  • I tried generating a new API key, but that did not solve the problem.

This looks like a bug in the Python library itself, since the same POST request succeeds with curl.

⚠️ I should mention that I was recently accepted into the GPT-4 beta (according to an email I received), so this may be related?

Thanks in advance for your help.

To Reproduce

Just run the ChatCompletion Python example from the docs. This worked fine for me until about two weeks ago.

Code snippets

import openai

openai.api_key = "myAPIkey"
openai.organization = "myorg"  # I've tried without this as well
openai.ChatCompletion.create(engine="gpt-3.5-turbo", messages=[{"role": "user", "content": "yo"}])

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/tristan/miniforge3/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/tristan/miniforge3/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/tristan/miniforge3/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/tristan/miniforge3/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/Users/tristan/miniforge3/lib/python3.10/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions)

The above code fails, but this works!

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'
{"id":"....","object":"chat.completion","created":1680624274,"model":"gpt-3.5-turbo-0301","usage":{"prompt_tokens":14,"completion_tokens":5,"total_tokens":19},"choices":[{"message":{"role":"assistant","content":"This is a test!"},"finish_reason":"stop","index":0}]}

OS

macOS Monterey 12.6

Python version

Python 3.10.6; it also fails in the python:3.10 Docker image

Library version

openai-python v0.27.0; also reproduced with v0.27.3
