This repository has been archived by the owner on Aug 10, 2023. It is now read-only.

[BUG] openai.error.ServiceUnavailableError: The server is overloaded or not ready yet. #608

Closed
Lynnjl opened this issue Feb 8, 2023 · 20 comments
Labels
bug Something isn't working

Comments

@Lynnjl

Lynnjl commented Feb 8, 2023

When I run the command: python3 -m revChatGPT.Official --api_key API_KEY --stream
I get the following error: openai.error.ServiceUnavailableError: The server is overloaded or not ready yet

@Lynnjl Lynnjl added the bug Something isn't working label Feb 8, 2023
@txtspam

txtspam commented Feb 8, 2023

I got it also:

openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.

Maybe they upgraded the model but are applying a rate limit? Maybe the OpenAI devs are watching us here.
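Since the error message itself suggests retrying, a generic retry-with-exponential-backoff wrapper could paper over the overload errors for a while. This is only a sketch; the function name and parameters are my own, and in real use you would pass the openai client's error classes (openai.error.RateLimitError, openai.error.ServiceUnavailableError) as retry_on so that only transient errors are retried:

```python
import time

def with_retries(fn, max_attempts=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying on transient errors with exponential backoff.

    In real use, pass retry_on=(openai.error.RateLimitError,
    openai.error.ServiceUnavailableError) so that only overload
    errors are retried, not genuine bugs.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

You would then wrap the completion call, e.g. with_retries(lambda: chatbot.ask(prompt)).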

@leolrg

leolrg commented Feb 8, 2023

Same

@cateyelow

Same, help me.

@Lynnjl
Author

Lynnjl commented Feb 8, 2023

> same help me

I reverted to text-davinci-003 using export GPT_ENGINE="text-davinci-003", and the problem is fixed.

@txtspam

txtspam commented Feb 8, 2023

> same help me

Here's a temporarily working Official.py solution, but it costs you:

Change
ENGINE = os.environ.get("GPT_ENGINE") or "text-chat-davinci-002-20221122"
to
ENGINE = os.environ.get("GPT_ENGINE") or "text-davinci-003"

Change
ENCODER = tiktoken.get_encoding("gpt2")
to
ENCODER = tiktoken.get_encoding("p50k_base")

Change
if response["choices"][0]["text"] == "<|im_end|>": break
to
if (response["choices"][0]["text"].strip() == "<|im_end|>" or response["choices"][0]["text"].strip() == "<|im"): break

Finally, find the "Add request/response to chat history for next prompt" section in the code, look for "<|im_end|>", and remove it so that only "\n" remains.

Hope it helps for a while.

Thanks and credits to pengzhile and coolmian.
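Putting those edits together, the affected parts of Official.py would look roughly like this. This is only a sketch of the changed lines, not the full file, and the is_end_token helper is my own naming for the inline break condition above:

```python
import os

# Fall back to the paid text-davinci-003 model instead of the
# text-chat-davinci-002-20221122 one that is currently erroring:
ENGINE = os.environ.get("GPT_ENGINE") or "text-davinci-003"

# text-davinci-003 uses the p50k_base encoding rather than gpt2
# (uncomment in the real file, where tiktoken is imported):
# ENCODER = tiktoken.get_encoding("p50k_base")

def is_end_token(text: str) -> bool:
    """True when a streamed chunk is the end-of-message marker,
    including the truncated "<|im" form mentioned above."""
    return text.strip() in ("<|im_end|>", "<|im")
```

In the streaming loop you would then write: if is_end_token(response["choices"][0]["text"]): break.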

@cateyelow

text-davinci-003 is a paid model. I have no money.

@acheong08
Owner

I don't either. Wait a while for the servers to recover from overload

@xjd-ziyu

xjd-ziyu commented Feb 8, 2023

> I don't either. Wait a while for the servers to recover from overload

It seems that the current official model is unstable and often affected by OpenAI itself. I look forward to your unofficial browserless version. Thank you.

@siyuyuan

siyuyuan commented Feb 8, 2023

> I don't either. Wait a while for the servers to recover from overload

I have the same problem:

openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.

@txtspam

txtspam commented Feb 8, 2023

> heres the temporarily working Official.py solution but it costs you: […]

> i have the same problem
>
> openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.

Use this temporarily...

@siyuyuan

siyuyuan commented Feb 8, 2023

> heres the temporarily working Official.py solution but it costs you: […]
>
> Use this temporarily...

but it is costly :(

@hafidzaini

so, no solution yet for this problem without paying? 😂

@ericzhou571

Hi,

Thanks for your solution. But why did you change the tokenizer to "p50k_base"? text-davinci-003 is supposed to be a GPT-3.5 model, and so is text-chat-davinci-002. Since we were using text-chat-davinci-002, I think we should still use the "gpt2" tokenizer, right?
Maybe you have a special reason to do so? It would be very nice if you could explain this change a bit more.

Best
Erich

@seanhuangjf

the same problem: openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.

@faissaloo

I'm getting the same thing

@hafidzaini


Could "text-davinci-002-render" be a real model name? Has anyone tried?

@leolrg

leolrg commented Feb 8, 2023

> Could "text-davinci-002-render" be a real model name? Has anyone tried?

Doesn't work.

@acheong08
Owner

works now!

@myfingerhurt

I think text-davinci-003 is dumber than text-chat-davinci-002-20221122 and gives more misleading information.

By the way, text-chat-davinci-002-sh-alpha-aoruigiofdj83 is not working any more:

InvalidRequestError(message='That model does not exist', param=None, code=None, http_status=404, request_id=None)
