
Doesn't work on Windows 10 #3

Closed
allrobot opened this issue Mar 4, 2023 · 1 comment

allrobot commented Mar 4, 2023

Running `python gptcli.py -p https://gpt.pawan.krd/backend-api/conversation` fails with:

(chatgpt) G:\[Notes]\chatgpt-demo\gptcli>python gptcli.py -p https://gpt.pawan.krd/backend-api/conversation
Loading key from G:\[Notes]\chatgpt-demo\gptcli\.key
Using proxy: https://gpt.pawan.krd/backend-api/conversation
Attach response in prompt: False
Stream mode: True
Input: hello?
C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py:899: RuntimeWarning: An HTTPS request is being
sent through an HTTPS proxy. This support for TLS in TLS is known to be disabled in the stdlib asyncio. This is why
you'll probably see an error in the log below.

It is possible to enable it via monkeypatching under Python 3.7 or higher. For more details, see:
* https://bugs.python.org/issue37179
* https://github.com/python/cpython/pull/28073

You can temporarily patch this as follows:
* https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support
* https://github.com/aio-libs/aiohttp/discussions/6044

  _, proto = await self._create_proxy_connection(req, traces, timeout)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback

Traceback (most recent call last):
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_requestor.py", line 587, in arequest_raw
    result = await session.request(**request_kwargs)
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\client.py", line 536, in _request
    conn = await self._connector.connect(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 540, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 899, in _create_connection
    _, proto = await self._create_proxy_connection(req, traces, timeout)
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 1288, in _create_proxy_connection
    raise ClientHttpProxyError(
aiohttp.client_exceptions.ClientHttpProxyError: 400, message='Bad Request', url=URL('https://gpt.pawan.krd/backend-api/conversation')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "G:\[Notes]\chatgpt-demo\gptcli\gptcli.py", line 161, in <module>
    answer = asyncio.run(query_openai_stream(data))
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\base_events.py", line 641, in run_until_complete
    return future.result()
  File "G:\[Notes]\chatgpt-demo\gptcli\gptcli.py", line 52, in query_openai_stream
    async for part in await openai.ChatCompletion.acreate(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_resources\chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_requestor.py", line 300, in arequest
    result = await self.arequest_raw(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_requestor.py", line 604, in arequest_raw
    raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x000002AC05F29A20>
Traceback (most recent call last):
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\proactor_events.py", line 116, in __del__
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\proactor_events.py", line 108, in close
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\base_events.py", line 745, in call_soon
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\base_events.py", line 510, in _check_closed
RuntimeError: Event loop is closed

(chatgpt) G:\[Notes]\chatgpt-demo\gptcli>

Running `python gptcli.py -r` fails with:

(chatgpt) G:\[Notes]\chatgpt-demo\gptcli>python gptcli.py -r
Loading key from G:\[Notes]\chatgpt-demo\gptcli\.key
Attach response in prompt: True
Stream mode: True
Input:
Input: Hello?

Traceback (most recent call last):
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 980, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\base_events.py", line 1055, in create_connection
    raise exceptions[0]
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\base_events.py", line 1040, in create_connection
    sock = await self._connect_sock(
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\base_events.py", line 954, in _connect_sock
    await self.sock_connect(sock, address)
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\proactor_events.py", line 704, in sock_connect
    return await self._proactor.connect(sock, address)
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\windows_events.py", line 812, in _poll
    value = callback(transferred, key, ov)
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\windows_events.py", line 599, in finish_connect
    ov.getresult()
OSError: [WinError 121] The semaphore timeout period has expired (信号灯超时时间已到)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_requestor.py", line 587, in arequest_raw
    result = await session.request(**request_kwargs)
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\client.py", line 536, in _request
    conn = await self._connector.connect(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 540, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 901, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 1206, in _create_direct_connection
    raise last_exc
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 1175, in _create_direct_connection
    transp, proto = await self._wrap_create_connection(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\aiohttp\connector.py", line 988, in _wrap_create_connection
    raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host api.openai.com:443 ssl:default [The semaphore timeout period has expired (信号灯超时时间已到)]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "G:\[Notes]\chatgpt-demo\gptcli\gptcli.py", line 161, in <module>
    answer = asyncio.run(query_openai_stream(data))
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\li\.conda\envs\chatgpt\lib\asyncio\base_events.py", line 641, in run_until_complete
    return future.result()
  File "G:\[Notes]\chatgpt-demo\gptcli\gptcli.py", line 52, in query_openai_stream
    async for part in await openai.ChatCompletion.acreate(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_resources\chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_requestor.py", line 300, in arequest
    result = await self.arequest_raw(
  File "C:\Users\li\.conda\envs\chatgpt\lib\site-packages\openai\api_requestor.py", line 604, in arequest_raw
    raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI

Do I really need to use a proxy for this to work? 😂

evilpan (Owner) commented Mar 5, 2023

-p https://gpt.pawan.krd/backend-api/conversation

Is that address actually a valid HTTP proxy? `-p` should point to your proxy server, which can be an HTTP/HTTPS proxy or a socks4a/socks5 proxy.
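As a quick sanity check (this is just a heuristic sketch; the helper name and scheme list are my own, not part of gptcli), a proxy address is `scheme://host[:port]` with no path, whereas the URL in this issue carries a `/backend-api/conversation` path, which marks it as an API endpoint rather than a proxy:

```python
from urllib.parse import urlsplit

# Schemes a proxy URL would plausibly use (assumption, not gptcli's list).
PROXY_SCHEMES = {"http", "https", "socks4", "socks4a", "socks5", "socks5h"}

def looks_like_proxy(url: str) -> bool:
    """Heuristic: a proxy address has a proxy scheme, a host, and no path."""
    parts = urlsplit(url)
    return (
        parts.scheme in PROXY_SCHEMES
        and bool(parts.hostname)
        and parts.path in ("", "/")
    )

print(looks_like_proxy("http://127.0.0.1:7890"))  # a typical local proxy -> True
print(looks_like_proxy("https://gpt.pawan.krd/backend-api/conversation"))  # -> False
```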

Do I really need to use a proxy for this to work?

It looks that way for now, because the domain api.openai.com is blocked in mainland China.
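If you want to verify that a local proxy reaches the API at all before involving gptcli, a minimal stdlib-only sketch looks like this (the `127.0.0.1:7890` address is a placeholder; substitute whatever proxy you actually run):

```python
import urllib.request

# Placeholder address: replace with your own HTTP(S) proxy endpoint.
PROXY = "http://127.0.0.1:7890"

handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# Requests made through `opener` are now routed via the proxy, e.g.:
# opener.open("https://api.openai.com/v1/models")
```

Note this sidesteps the TLS-in-TLS limitation that aiohttp warns about in the log above, since `urllib` tunnels HTTPS through the proxy with CONNECT.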

evilpan closed this as completed Mar 5, 2023