[bug] openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
#30 · Closed · signebedi opened this issue on Mar 28, 2023 · 2 comments
[question] tell me three interesting world capital cities
..... Traceback (most recent call last):
File "/home/sig/Code/gptty/venv/bin/gptty", line 11, in <module>
load_entry_point('gptty', 'console_scripts', 'gptty')()
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/home/sig/Code/gptty/gptty/__main__.py", line 77, in chat
asyncio.run(chat_async_wrapper(config_path))
File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/sig/Code/gptty/gptty/__main__.py", line 107, in chat_async_wrapper
await create_chat_room(configs=configs, config_path=config_path)
File "/home/sig/Code/gptty/gptty/gptty.py", line 137, in create_chat_room
response = await response_task
File "/home/sig/Code/gptty/gptty/gptty.py", line 43, in fetch_response
return await openai.Completion.acreate(
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_resources/completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 310, in arequest
resp, got_stream = await self._interpret_async_response(result, stream)
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 645, in _interpret_async_response
self._interpret_response_line(
File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
[model] add support for chat completion models
Currently, we only support completion models. Next, we will need to find a way to support chat completion models as well.
Using the gpt-3.5-turbo model: https://stackoverflow.com/questions/75774873/openai-chatgpt-gpt-3-5-api-error-this-is-a-chat-model-and-not-supported-in-t
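One possible approach, sketched below under some assumptions: dispatch on the model name and call `openai.ChatCompletion.acreate` for chat models (which take a `messages` list rather than a raw `prompt`), falling back to `openai.Completion.acreate` for legacy completion models. The `CHAT_MODEL_PREFIXES` list and the exact `fetch_response` signature are assumptions, not gptty's actual code.

```python
# Assumption: chat models can be recognized by these name prefixes.
CHAT_MODEL_PREFIXES = ("gpt-3.5-turbo", "gpt-4")


def is_chat_model(model: str) -> bool:
    """Return True if the model must use the v1/chat/completions endpoint."""
    return model.startswith(CHAT_MODEL_PREFIXES)


async def fetch_response(prompt: str, model: str, **kwargs):
    """Route the request to the endpoint appropriate for the model.

    Hypothetical replacement for gptty's fetch_response; parameter names
    are illustrative only.
    """
    import openai  # imported lazily so the helper above stays testable

    if is_chat_model(model):
        # Chat models take a list of role-tagged messages, not a raw prompt.
        return await openai.ChatCompletion.acreate(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            **kwargs,
        )
    # Legacy completion models keep using v1/completions.
    return await openai.Completion.acreate(model=model, prompt=prompt, **kwargs)
```

With a dispatch like this, the `gpt-3.5-turbo` request from the traceback would be sent to v1/chat/completions instead of v1/completions, avoiding the InvalidRequestError above.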