
Connection to api.openai.com timed out #3823

Closed
deRek8866Rk opened this issue May 5, 2023 · 22 comments

Labels
API access Trouble with connecting to the API

Comments

@deRek8866Rk

⚠️ Search for existing issues first ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

Docker

Which version of Auto-GPT are you using?

Latest Release

GPT-3 or GPT-4?

GPT-4

Steps to reproduce 🕹

Current behavior 😯

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 179, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7f3f8bbde830>, 'Connection to api.openai.com timed out. (connect timeout=600)')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 487, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 815, in urlopen
return self.urlopen(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 815, in urlopen
return self.urlopen(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f3f8bbde830>, 'Connection to api.openai.com timed out. (connect timeout=600)'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 516, in request_raw
result = _thread_context.session.request(
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 508, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f3f8bbde830>, 'Connection to api.openai.com timed out. (connect timeout=600)'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/app/autogpt/main.py", line 5, in
autogpt.cli.main()
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in call
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
rv = super().invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/app/autogpt/cli.py", line 90, in main
run_auto_gpt(
File "/app/autogpt/main.py", line 171, in run_auto_gpt
agent.start_interaction_loop()
File "/app/autogpt/agent/agent.py", line 112, in start_interaction_loop
assistant_reply = chat_with_ai(
File "/app/autogpt/llm/chat.py", line 245, in chat_with_ai
assistant_reply = create_chat_completion(
File "/app/autogpt/llm/llm_utils.py", line 166, in create_chat_completion
response = api_manager.create_chat_completion(
File "/app/autogpt/llm/api_manager.py", line 55, in create_chat_completion
response = openai.ChatCompletion.create(
File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 216, in request
result = self.request_raw(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 526, in request_raw
raise error.Timeout("Request timed out: {}".format(e)) from e
openai.error.Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f3f8bbde830>, 'Connection to api.openai.com timed out. (connect timeout=600)'))

Expected behavior 🤔

No response

Your prompt 📝

# Paste your prompt here

Your Logs 📒

<insert your logs here>
@AndresCdo
Contributor

Check whether you have set up billing and whether you have enough credit.

@k-boikov
Contributor

k-boikov commented May 5, 2023

Are you from China? If so, OpenAI is not supported in China.

@k-boikov added the "API access" label (Trouble with connecting to the API) on May 6, 2023
@guacamole-hunter

Well, it's timing out because it's not receiving a response. Definitely check your billing, or whether you can even ping the API from your location. Or maybe there's a typo in your API key.

Cheers,
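(If you want a quick way to test both the API key and the network path from the machine running Docker, here's a minimal sketch using the requests library — the key value is a placeholder, and the status-code interpretations are rough rules of thumb:)

```python
# Quick sanity check: a 401 means the key is wrong, a 429 usually means
# quota/billing trouble, and a connect timeout means the network path
# to api.openai.com is the actual problem.
import requests

API_KEY = "sk-..."  # placeholder - substitute your real key

resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=15,
)
print(resp.status_code)
print(resp.text[:500])
```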

@Nantris

Nantris commented May 7, 2023

How can we retry without having to go through five steps of outlining the goals for the AI again on every attempt? There's no obvious cause for this error, and it's very difficult to debug when you need to re-enter all this text in five separate prompts for every attempt.

It seems like an ai_settings.yaml file would help, but there's seemingly no template on the web? (related: #1106)
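(For what it's worth, ai_settings.yaml is just a small YAML file. Below is a minimal sketch with made-up values; the exact keys your release accepts are defined in the repo, so treat this as an illustration rather than an official template:)

```yaml
# ai_settings.yaml - illustrative values only
ai_name: ResearchGPT
ai_role: an AI that researches a topic and writes a short summary
ai_goals:
  - Search the web for recent articles on the topic
  - Write a summary of the findings to summary.txt
  - Shut down once the summary is complete
```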

@anonhostpi

Current list of unsupported regions (may be inaccurate, please check with your API provider and your local ordinance)

A. Afghanistan
B. Bahrain, Belarus, Burma (Myanmar)
C. Central African Republic, Chad, China, Cuba
E. Eritrea, Ethiopia
I. Iran
K. Kazakhstan
L. Laos, Libya
M. Macedonia, Marshall Islands, Mauritius, Micronesia
N. Nauru, Nepal, North Korea
P. Palau
R. Russia
S. Saint Kitts and Nevis, Saint Lucia, Saint Vincent and Grenadines, Somalia, South Sudan, Sudan, Syria
T. Tonga, Turkmenistan
U. Ukraine, Uzbekistan
V. Venezuela
Y. Yemen

@Nantris

Nantris commented May 9, 2023

Thanks for the list, @anonhostpi. Unfortunately it doesn't account for my problem, since I'm in the USA.

@anonhostpi

What does a ping of api.openai.com tell you?

@Nantris

Nantris commented May 9, 2023

On the host machine the ping works without any issues, but openaipublic.blob.core.windows.net returns "Reply from 20.150.77.132: Destination host unreachable."

I already tried rebooting, reinstalling Docker, and ipconfig /flushdns, so I'm not really sure what else to try.

Interestingly, the host that replies "Destination host unreachable" does appear reachable if I visit this link: https://openaipublic.blob.core.windows.net/critiques/README.md

@anonhostpi

That webpage may be cached; try clearing your cache and reloading it.

@guacamole-hunter

guacamole-hunter commented May 9, 2023

Hey, I just noticed something. Your URL is wrong: v1/{ai model here__}/chat/completions. It's missing the AI model.

It says "Max retries exceeded with url: /v1/chat/completions".

@anonhostpi

anonhostpi commented May 9, 2023

So we know it's not API key related, because this is what that url does when you don't provide a key:

[screenshot: the response that URL returns when no API key is provided]

@anonhostpi

anonhostpi commented May 9, 2023

So this looks like the problematic line:

[screenshot highlighting the line in question]

Here's that line in the source code: https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/llm/api_manager.py#L56.

So whatever is providing model to that function doesn't seem to be doing it correctly.

Traced it back through the files, and this is where model is set:

https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/llm/chat.py#LL80C39-L80C39
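(For context, here is a minimal sketch of the call being traced, using the openai 0.x Python SDK that the traceback shows — not Auto-GPT's exact wrapper code. In this SDK the endpoint is always /v1/chat/completions; the model name is passed via the model keyword argument and sent in the JSON body of the request:)

```python
import openai

openai.api_key = "sk-..."  # placeholder

# Equivalent shape of the call in api_manager.py; in Auto-GPT the model
# value is whatever chat.py passes down.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```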

@Nantris

Nantris commented May 9, 2023

The page isn't cached for me, and in our case we don't seem to have any line in the logging like v1/{ai model here__}/chat/completions.

Here's our full log output. In our case the error is [Errno 101] Network is unreachable.

Is there some way I can debug the Docker machine to ensure it's at least accessing the network?

Full log output:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fe5899742b0>: Failed to establish a new connection: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 487, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fe5899742b0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/app/autogpt/main.py", line 5, in
autogpt.cli.main()
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in call
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
rv = super().invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/app/autogpt/cli.py", line 90, in main
run_auto_gpt(
File "/app/autogpt/main.py", line 171, in run_auto_gpt
agent.start_interaction_loop()
File "/app/autogpt/agent/agent.py", line 112, in start_interaction_loop
assistant_reply = chat_with_ai(
File "/app/autogpt/llm/chat.py", line 111, in chat_with_ai
) = generate_context(prompt, relevant_memory, full_message_history, model)
File "/app/autogpt/llm/chat.py", line 54, in generate_context
current_tokens_used = count_message_tokens(current_context, model)
File "/app/autogpt/llm/token_counter.py", line 28, in count_message_tokens
encoding = tiktoken.encoding_for_model(model)
File "/usr/local/lib/python3.10/site-packages/tiktoken/model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
File "/usr/local/lib/python3.10/site-packages/tiktoken/registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
File "/usr/local/lib/python3.10/site-packages/tiktoken_ext/openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
File "/usr/local/lib/python3.10/site-packages/tiktoken/load.py", line 114, in load_tiktoken_bpe
contents = read_file_cached(tiktoken_bpe_file)
File "/usr/local/lib/python3.10/site-packages/tiktoken/load.py", line 46, in read_file_cached
contents = read_file(blobpath)
File "/usr/local/lib/python3.10/site-packages/tiktoken/load.py", line 24, in read_file
return requests.get(blobpath).content
File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 520, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fe5899742b0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
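(One way to check whether the container has any outbound network access at all is to run a small script inside it — a minimal sketch; the container name and file path below are placeholders, and the hostnames are just the two from the tracebacks above:)

```python
# connectivity_check.py
# Run inside the container, e.g.:
#   docker cp connectivity_check.py <container>:/tmp/
#   docker exec -it <container> python /tmp/connectivity_check.py
import socket

HOSTS = [
    ("api.openai.com", 443),
    ("openaipublic.blob.core.windows.net", 443),
]

for host, port in HOSTS:
    try:
        # getaddrinfo exercises DNS; create_connection exercises TCP reachability
        addr = socket.getaddrinfo(host, port)[0][4][0]
        with socket.create_connection((host, port), timeout=10):
            print(f"OK   {host} ({addr}) reachable on port {port}")
    except OSError as exc:
        print(f"FAIL {host}: {exc!r}")
```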

@anonhostpi

No, but you do have v1/chat/completions, which suggests that the LLM isn't being selected correctly

@anonhostpi

anonhostpi commented May 9, 2023

https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/config/config.py#L41

What is your FAST_LLM_MODEL set to?

Thank you @guacamole-hunter
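(For reference, the config line linked above follows the usual environment-variable-with-default pattern — a rough sketch of that pattern, not the project's exact code:)

```python
import os

# If FAST_LLM_MODEL / SMART_LLM_MODEL aren't set in .env or the environment,
# the defaults below are what gets used.
fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo")
smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")

print(f"fast model:  {fast_llm_model}")
print(f"smart model: {smart_llm_model}")
```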

@Nantris

Nantris commented May 9, 2023

I haven't set FAST_LLM_MODEL explicitly, but it says it defaults to gpt-3.5-turbo.

Here's the full .env file I'm using, with my key redacted. Besides OPENAI_API_KEY, the only other setting I configured is AI_SETTINGS_FILE, because entering a bunch of stuff on every attempt was too much. Everything else should be default, and I didn't notice the setup guide saying anything else needed to be changed:

(Note: slashes added to avoid Github markdown conversion)

`.env`

/################################################################################
/### AUTO-GPT - GENERAL SETTINGS
/################################################################################

/## EXECUTE_LOCAL_COMMANDS - Allow local command execution (Default: False)
/## RESTRICT_TO_WORKSPACE - Restrict file operations to workspace ./auto_gpt_workspace (Default: True)
/# EXECUTE_LOCAL_COMMANDS=False
/# RESTRICT_TO_WORKSPACE=True

/## USER_AGENT - Define the user-agent used by the requests library to browse website (string)
/# USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"

/## AI_SETTINGS_FILE - Specifies which AI Settings file to use (defaults to ai_settings.yaml)
AI_SETTINGS_FILE=ai_settings.yaml

/## AUTHORISE COMMAND KEY - Key to authorise commands
/# AUTHORISE_COMMAND_KEY=y
/## EXIT_KEY - Key to exit AUTO-GPT
/# EXIT_KEY=n

/################################################################################
/### LLM PROVIDER
/################################################################################

/### OPENAI
/## OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
/## TEMPERATURE - Sets temperature in OpenAI (Default: 0)
/## USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=[REDACTED]
/# TEMPERATURE=0
/# USE_AZURE=False

/### AZURE
/# moved to azure.yaml.template

/################################################################################
/### LLM MODELS
/################################################################################

/## SMART_LLM_MODEL - Smart language model (Default: gpt-4)
/## FAST_LLM_MODEL - Fast language model (Default: gpt-3.5-turbo)
/# SMART_LLM_MODEL=gpt-4
/# FAST_LLM_MODEL=gpt-3.5-turbo

/### LLM MODEL SETTINGS
/## FAST_TOKEN_LIMIT - Fast token limit for OpenAI (Default: 4000)
/## SMART_TOKEN_LIMIT - Smart token limit for OpenAI (Default: 8000)
/## When using --gpt3only this needs to be set to 4000.
/# FAST_TOKEN_LIMIT=4000
/# SMART_TOKEN_LIMIT=8000

/### EMBEDDINGS
/## EMBEDDING_MODEL - Model to use for creating embeddings
/## EMBEDDING_TOKENIZER - Tokenizer to use for chunking large inputs
/## EMBEDDING_TOKEN_LIMIT - Chunk size limit for large inputs
/# EMBEDDING_MODEL=text-embedding-ada-002
/# EMBEDDING_TOKENIZER=cl100k_base
/# EMBEDDING_TOKEN_LIMIT=8191

/################################################################################
/### MEMORY
/################################################################################

/### MEMORY_BACKEND - Memory backend type
/## local - Default
/## pinecone - Pinecone (if configured)
/## redis - Redis (if configured)
/## milvus - Milvus (if configured - also works with Zilliz)
/## MEMORY_INDEX - Name of index created in Memory backend (Default: auto-gpt)
/# MEMORY_BACKEND=local
/# MEMORY_INDEX=auto-gpt

/### PINECONE
/## PINECONE_API_KEY - Pinecone API Key (Example: my-pinecone-api-key)
/## PINECONE_ENV - Pinecone environment (region) (Example: us-west-2)
/# PINECONE_API_KEY=your-pinecone-api-key
/# PINECONE_ENV=your-pinecone-region

/### REDIS
/## REDIS_HOST - Redis host (Default: localhost, use "redis" for docker-compose)
/## REDIS_PORT - Redis port (Default: 6379)
/## REDIS_PASSWORD - Redis password (Default: "")
/## WIPE_REDIS_ON_START - Wipes data / index on start (Default: True)
/# REDIS_HOST=localhost
/# REDIS_PORT=6379
/# REDIS_PASSWORD=
/# WIPE_REDIS_ON_START=True

/### WEAVIATE
/## MEMORY_BACKEND - Use 'weaviate' to use Weaviate vector storage
/## WEAVIATE_HOST - Weaviate host IP
/## WEAVIATE_PORT - Weaviate host port
/## WEAVIATE_PROTOCOL - Weaviate host protocol (e.g. 'http')
/## USE_WEAVIATE_EMBEDDED - Whether to use Embedded Weaviate
/## WEAVIATE_EMBEDDED_PATH - File system path were to persist data when running Embedded Weaviate
/## WEAVIATE_USERNAME - Weaviate username
/## WEAVIATE_PASSWORD - Weaviate password
/## WEAVIATE_API_KEY - Weaviate API key if using API-key-based authentication
/# WEAVIATE_HOST="127.0.0.1"
/# WEAVIATE_PORT=8080
/# WEAVIATE_PROTOCOL="http"
/# USE_WEAVIATE_EMBEDDED=False
/# WEAVIATE_EMBEDDED_PATH="/home/me/.local/share/weaviate"
/# WEAVIATE_USERNAME=
/# WEAVIATE_PASSWORD=
/# WEAVIATE_API_KEY=

/### MILVUS
/## MILVUS_ADDR - Milvus remote address (e.g. localhost:19530, https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443)
/## MILVUS_USERNAME - username for your Milvus database
/## MILVUS_PASSWORD - password for your Milvus database
/## MILVUS_SECURE - True to enable TLS. (Default: False)
/## Setting MILVUS_ADDR to a https:// URL will override this setting.
/## MILVUS_COLLECTION - Milvus collection, change it if you want to start a new memory and retain the old memory.
/# MILVUS_ADDR=localhost:19530
/# MILVUS_USERNAME=
/# MILVUS_PASSWORD=
/# MILVUS_SECURE=
/# MILVUS_COLLECTION=autogpt

/################################################################################
/### IMAGE GENERATION PROVIDER
/################################################################################

/### OPEN AI
/## IMAGE_PROVIDER - Image provider (Example: dalle)
/## IMAGE_SIZE - Image size (Example: 256)
/## DALLE: 256, 512, 1024
/# IMAGE_PROVIDER=dalle
/# IMAGE_SIZE=256

/### HUGGINGFACE
/## HUGGINGFACE_IMAGE_MODEL - Text-to-image model from Huggingface (Default: CompVis/stable-diffusion-v1-4)
/## HUGGINGFACE_API_TOKEN - HuggingFace API token (Example: my-huggingface-api-token)
/# HUGGINGFACE_IMAGE_MODEL=CompVis/stable-diffusion-v1-4
/# HUGGINGFACE_API_TOKEN=your-huggingface-api-token

/### STABLE DIFFUSION WEBUI
/## SD_WEBUI_AUTH - Stable diffusion webui username:password pair (Example: username:password)
/## SD_WEBUI_URL - Stable diffusion webui API URL (Example: http://127.0.0.1:7860)
/# SD_WEBUI_AUTH=
/# SD_WEBUI_URL=http://127.0.0.1:7860

/################################################################################
/### AUDIO TO TEXT PROVIDER
/################################################################################

/### HUGGINGFACE
/# HUGGINGFACE_AUDIO_TO_TEXT_MODEL=facebook/wav2vec2-base-960h

/################################################################################
/### GIT Provider for repository actions
/################################################################################

/### GITHUB
/## GITHUB_API_KEY - Github API key / PAT (Example: github_pat_123)
/## GITHUB_USERNAME - Github username
/# GITHUB_API_KEY=github_pat_123
/# GITHUB_USERNAME=your-github-username

/################################################################################
/### WEB BROWSING
/################################################################################

/### BROWSER
/## HEADLESS_BROWSER - Whether to run the browser in headless mode (default: True)
/## USE_WEB_BROWSER - Sets the web-browser driver to use with selenium (default: chrome).
/## Note: set this to either 'chrome', 'firefox', or 'safari' depending on your current browser
/# HEADLESS_BROWSER=True
/# USE_WEB_BROWSER=chrome
/## BROWSE_CHUNK_MAX_LENGTH - When browsing website, define the length of chunks to summarize (in number of tokens, excluding the response. 75 % of FAST_TOKEN_LIMIT is usually wise )
/# BROWSE_CHUNK_MAX_LENGTH=3000
/## BROWSE_SPACY_LANGUAGE_MODEL is used to split sentences. Install additional languages via pip, and set the model name here. Example Chinese: python -m spacy download zh_core_web_sm
/# BROWSE_SPACY_LANGUAGE_MODEL=en_core_web_sm

/### GOOGLE
/## GOOGLE_API_KEY - Google API key (Example: my-google-api-key)
/## CUSTOM_SEARCH_ENGINE_ID - Custom search engine ID (Example: my-custom-search-engine-id)
/# GOOGLE_API_KEY=your-google-api-key
/# CUSTOM_SEARCH_ENGINE_ID=your-custom-search-engine-id

/################################################################################
/### TTS PROVIDER
/################################################################################

/### MAC OS
/## USE_MAC_OS_TTS - Use Mac OS TTS or not (Default: False)
/# USE_MAC_OS_TTS=False

/### STREAMELEMENTS
/## USE_BRIAN_TTS - Use Brian TTS or not (Default: False)
/# USE_BRIAN_TTS=False

/### ELEVENLABS
/## ELEVENLABS_API_KEY - Eleven Labs API key (Example: my-elevenlabs-api-key)
/## ELEVENLABS_VOICE_1_ID - Eleven Labs voice 1 ID (Example: my-voice-id-1)
/## ELEVENLABS_VOICE_2_ID - Eleven Labs voice 2 ID (Example: my-voice-id-2)
/# ELEVENLABS_API_KEY=your-elevenlabs-api-key
/# ELEVENLABS_VOICE_1_ID=your-voice-id-1
/# ELEVENLABS_VOICE_2_ID=your-voice-id-2

/################################################################################
/### TWITTER API
/################################################################################

/# TW_CONSUMER_KEY=
/# TW_CONSUMER_SECRET=
/# TW_ACCESS_TOKEN=
/# TW_ACCESS_TOKEN_SECRET=

/################################################################################
/### ALLOWLISTED PLUGINS
/################################################################################

/#ALLOWLISTED_PLUGINS - Sets the listed plugins that are allowed (Example: plugin1,plugin2,plugin3)
ALLOWLISTED_PLUGINS=

/################################################################################
/### CHAT PLUGIN SETTINGS
/################################################################################
/# CHAT_MESSAGES_ENABLED - Enable chat messages (Default: False)
/# CHAT_MESSAGES_ENABLED=False

@anonhostpi

Can you use ```s? Markdown is trying to parse your #'s

@Nantris

Nantris commented May 9, 2023

Unfortunately the ```s don't work inside <details>, but I just cleaned it up. Thanks so much for your time and assistance, @anonhostpi.

@anonhostpi

Interesting...

@anonhostpi

anonhostpi commented May 10, 2023

Oh, I just realized that this is a continuation of #3977. Since the original issue here is not supported, and there's already another thread for you, let's continue on the other one.

Closing this issue, as the region (China) is not supported.

- my discord username: anonhostpi

@anonhostpi closed this as not planned on May 10, 2023
@niuhuluzhihao

I am in China. How can I use it?

@guacamole-hunter

A VPN, probably, set to Europe or the US as the location.
